Photo Corners


A   S C R A P B O O K   O F   S O L U T I O N S   F O R   T H E   P H O T O G R A P H E R

Enhancing the enjoyment of taking pictures with news that matters, features that entertain and images that delight. Published frequently.

Mozilla Introduces New JPEG Encoder

6 March 2014

In a blog post for Mozilla Research, Josh Aas introduced a new Mozilla project called mozjpeg "to provide a production-quality JPEG encoder that improves compression while maintaining compatibility with the vast majority of deployed decoders."

One of the traditional problems with improving JPEG compression has been, as Aas put it, "it would require going through a multi-year period of relatively poor compatibility with the world's deployed software."

You can compress it, sure, but you can't uncompress it with your usual tools. Like opening a JPEG in Photoshop.

Aas explained the mozjpeg approach:

What we're releasing today, as version 1.0, is a fork of libjpeg-turbo with 'jpgcrush' functionality added. We noticed that people have been reducing JPEG file sizes using a perl script written by Loren Merritt called 'jpgcrush', references to which can be found on various forums around the Web. It losslessly reduces file sizes, typically by 2-6% for PNGs encoded to JPEG by IJG libjpeg, and 10% on average for a sample of 1500 JPEG files from Wikimedia. It does this by figuring out which progressive coding configuration uses the fewest bits. So far as we know, no production encoder has this functionality built in, so we added it as the first feature in 'mozjpeg.'

So the new method provides, on average, a lossless 2 to 10 percent reduction in file size over previous encoders. And you can still open the file in Photoshop.
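
The size win comes from JPEG's progressive mode. For a rough feel for it, here's a minimal Python sketch (assuming the Pillow imaging library and any photo.jpg of your own) that saves the same image as a baseline and a progressive JPEG and compares the byte counts. Note that jpgcrush goes further, searching many progressive scan configurations for the smallest result, so don't expect the full savings from this:

    # Compare baseline and progressive JPEG sizes for the same image.
    # Assumes Pillow is installed and photo.jpg is any JPEG you have on hand.
    import os
    from PIL import Image

    img = Image.open("photo.jpg")

    img.save("baseline.jpg", quality=75, optimize=True)
    img.save("progressive.jpg", quality=75, optimize=True, progressive=True)

    for name in ("baseline.jpg", "progressive.jpg"):
        print(name, os.path.getsize(name), "bytes")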

The GitHub page for mozjpeg v1.0 describes the project:

mozjpeg v1.0 is a fork of libjpeg-turbo with 'jpgcrush' functionality built in.

The 'jpgcrush' feature finds the progressive coding configuration which uses the fewest bits. This most frequently reduces file size by 2-10%, but those are not hard limits. Significantly greater reductions have been observed.

Library configuration defaults are the same as for libjpeg-turbo, in order to make transitions as painless as possible. There are new configuration options for new features, but they are not enabled by default.

The 'cjpeg' program defaults are not the same as for the equivalent program in libjpeg-turbo. The 'cjpeg' defaults for mozjpeg are set to aggressively optimize for smaller file sizes.

You can download the source code from the GitHub page.
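
Once built, the package gives you a drop-in replacement for the familiar cjpeg command. Here's a minimal sketch of driving it from Python, assuming the compiled cjpeg binary is on your PATH and input.ppm is an uncompressed image of your own; the quality setting is just an illustrative choice:

    # Compress an image with mozjpeg's cjpeg command-line tool.
    # Assumes the compiled cjpeg binary is on PATH and input.ppm exists.
    import subprocess

    subprocess.run(
        ["cjpeg", "-quality", "75", "-outfile", "output.jpg", "input.ppm"],
        check=True,
    )
    print("Wrote output.jpg")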

Below we've reprised our March 2000 article on JPEG compression, which provides some background on the format. The links are antique but the technology is still vital.

JPEG Revealed

There's a lot more to JPEG image compression than meets the eye. Put on your 3-D X-Ray Digital Zoom glasses and we'll show you.

You'll be amazed to find things like the rarely seen Progressive Display or the even rarer Variable Compression.

And you may even run screaming in terror when you discover what those guys in the Joint Photographic Experts Group were thinking about when they came up with a "lossy" method of file compression.

The advantages are real and, while the disadvantages are also real, JPEG compression (it's really only a way of compressing images, not a file format) won't perceptibly degrade your images -- if you don't abuse it.

It has competition (several different formats using different compression techniques promising better quality and smaller files), including itself (a draft of the JPEG 2000 specification is now available). But we'll look at these products (like MrSID and MT-WICE) another time. First, let's get acquainted with something much closer to home: JPEG.

FILE FORMAT OR ALGORITHM?

Your camera may call your images JPEGs, but actually, it's making files that follow the JPEG File Interchange Format, or JFIF. That basic pixel structure is usually wrapped up in a file format that may be proprietary or based on the TIFF/JPEG specification. If the latter, the Exif specification is probably also at work, providing exposure data in the tagged header.
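
You can see the wrapping for yourself by peeking at the first few bytes of a file. A small Python sketch (photo.jpg is any JPEG of your own) that checks for the JPEG start-of-image marker and then reads the identifier of the first application segment:

    # Peek at a JPEG's header to see whether it carries a JFIF or Exif segment.
    with open("photo.jpg", "rb") as f:
        header = f.read(12)

    assert header[:2] == b"\xff\xd8", "not a JPEG (no start-of-image marker)"
    # After the 2-byte SOI marker come a 2-byte segment marker and a 2-byte
    # length, so the segment's identifier string starts at byte 6.
    print("Application segment identifier:", header[6:10])  # b'JFIF' or b'Exif'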

IMAGE COMPRESSION

But JPEG's job is to compress image files. If you've compressed data files (using ZIP or StuffIt, for example), you may not appreciate a "lossy" compression like JPEG. Every bit or every byte in a data file is, well, critical.

But you don't balance your checkbook with images. You look at images. And if you can't perceive the detail, it's not data, it's disk space. With compression techniques intended for images, you can compress your image files significantly -- without being able to tell (even though you've lost some original data). "I can tell," you yell, but actually, we humans perceive small color changes less accurately than small changes in brightness. Blame the brain.

A SLIDING SCALE

JPEG has the grace to permit you to decide just how much information you are willing to lose in any particular image. If you're building an index of images, for example, file size is more important than image quality. You just need a reminder of what the full image looks like, a sketch. But if you're building an archive of your work, file size will be much less important than quality. You can set JPEG options to achieve either objective.

When you Save As JPEG, your application usually presents a sliding scale of options marked on one end by Compression and the other by Quality. But the scale is unique to each application. In some cases it's numeric (from 0 to 4 or 0 to 100 or 100 to 0), in others it's broadly descriptive (high, medium and low). This lack of standardization means getting the same result in two applications can require magical powers.

But that's bare bones. More sophisticated JPEG filters like ProJPEG present every option as well as a preview of the result of your settings. And being able to save the settings makes repeatability a snap. We'll review ProJPEG shortly, but meanwhile you can download the demo at http://www.boxtopsoft.com/ for either MacOS or Windows.

LOSSY

Should you worry about JPEG's "lossy" compression?

You start losing information from an image the moment you snap the shutter or click the scan button. The real question is where to draw the line.

So how much is too much?

Too much is when you can see the image degrade. You've probably seen this at its worst on the Web where photographic images are sometimes very blocky. The Web master was (perhaps overly) concerned about download times, sacrificing image quality.

But until you see the image degrade, you're just enjoying the benefits of a more efficient file size.

Try setting your JPEG compression around 75 on a compression-quality scale of 0 to 100. Save again with settings of, say, 65 and 85. Then open all three after saving. Notice any difference?
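
If repeating that experiment in your image editor is tedious, a few lines of Python with the Pillow library will do it (photo.tif stands in for any uncompressed original of yours):

    # Save one image at three JPEG quality settings and compare file sizes.
    # Assumes Pillow is installed; photo.tif is any original of your own.
    import os
    from PIL import Image

    img = Image.open("photo.tif")

    for q in (65, 75, 85):
        name = f"photo_q{q}.jpg"
        img.save(name, quality=q)
        print(name, os.path.getsize(name), "bytes")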

The most important caution (other than using a ridiculously low quality setting) is not to save with JPEG compression as you work on an image. Recompressing an altered image loses more information. If you decompress, edit and recompress at the original quality setting, degradation is minimized (though less so at low quality settings). But you still lose information.

Don't save as JPEG until you're done editing. Use your application's native format (which often also provides perks like layers and change histories) while you crop and edit the image.

Unless you are rotating the image.

It's possible to rotate a JPEG without losing any information (try Cameraid for the MacOS at http://www.cameraid.com/ or PIE for Windows at http://www.hoju.de/), but unless your application specifically says it performs lossless JPEG rotations and flips (it's a different algorithm than the typical rotate or flip), it doesn't.
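
Command-line types can also reach for jpegtran, the lossless transformer that ships with libjpeg. A minimal sketch of calling it from Python, assuming jpegtran is installed and on your PATH (the -perfect flag makes it refuse the rotation rather than trim edge blocks when the image dimensions aren't multiples of the block size):

    # Rotate a JPEG 90 degrees without recompressing it, using jpegtran.
    # Assumes jpegtran (part of libjpeg) is installed and on PATH.
    import subprocess

    subprocess.run(
        ["jpegtran", "-rotate", "90", "-perfect", "-copy", "all",
         "-outfile", "rotated.jpg", "photo.jpg"],
        check=True,
    )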

LOSSLESS

A word about Lossless JPEG. Mythology. Right up there with the Unicorn and the Cyclops and Atlantis and ... well, you get the idea. Once upon a time there was something called Lossless JPEG, but it only compressed to 2:1 and is now largely obsolete and unsupported by common image editing programs. Like Atlantis.

JPEG is, after all, about compression. JPEG compression, as you will see below, loses information. Even at the highest quality setting. So if you want lossless, you don't want JPEG.

Having said that (strongly), I must point out that JPEG 2000 is about the quest for a lossless JPEG. Unfortunately for the fable, it doesn't use JPEG compression. Proving the point.

HOW IT WORKS

The JPEG algorithm does three things, in this order:

  1. Transforms your image into frequency information (using a, repeat after me, Discrete Cosine Transform).
  2. Deletes parts of the image not critical to human visual perception (aka Quantization).
  3. Compresses the remaining data by exploiting statistical redundancy (all together, Huffman Encoding).

Optionally (but typically), two intermediary steps may also be involved: color space conversion and subsampling.

The blocky nature of a JPEG is the direct result of the discrete cosine transform, or DCT, which takes each 8x8 pixel block and transforms it into 64 numbers describing the block not pixel by pixel but as a set of frequencies. The first number is the average of all 64 pixels (called the Direct Current or DC coefficient); the other 63 describe variations from that average (the Alternating Current or AC coefficients).

So if you have an 8x8 block of clear blue sky, the first number will represent that color and the others (which don't vary from it) will all be zero.

This step provides a description of the image in frequencies (which will be used in Quantization), with values that tend to be largest in the top left corner and get progressively smaller toward the bottom right (which will help compression).
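
Here's what that looks like in practice: a minimal Python sketch (NumPy and SciPy assumed) that runs a two-dimensional DCT on a nearly flat 8x8 block. Notice how almost all of the energy piles up in the single top left (DC) coefficient:

    # Two-dimensional Discrete Cosine Transform of one 8x8 block, JPEG-style.
    # Assumes NumPy and SciPy are installed.
    import numpy as np
    from scipy.fftpack import dct

    # A nearly uniform block of "blue sky" pixels, shifted by -128 as JPEG does.
    block = np.full((8, 8), 200.0) + np.random.randint(-2, 3, (8, 8))
    shifted = block - 128

    # Apply the DCT along rows, then along columns.
    coeffs = dct(dct(shifted, axis=0, norm="ortho"), axis=1, norm="ortho")

    print(np.round(coeffs, 1))  # large DC value top left, tiny AC values elsewhere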

Quantization reduces the numbers that were calculated in DCT by a particular value and then -- drum roll! -- rounds off the result.

Now watch carefully (note there is nothing up my sleeve). If the average for our block is 1024 and we divide it by a particular value of 10, we get 102.4, which rounds to 102. And if we have a 1014, we end up with 101 (101.4 rounded down). Cute. What happens when we have a 1015? We get a 102 (101.5 rounded up). And if we have a 1005, we get a 101 (100.5 rounded up).

You are watching lossy in action.

When you decompress, those original values of 1024, 1015, 1014 and 1005 will be just two: 1020 and 1010 (using the same quantization value of 10). Call that dequantization for extra credit.
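
The whole sleight of hand fits in a few lines of Python (the quantization value of 10 comes straight from the example above):

    # Quantize, then dequantize, and watch four values collapse into two.
    quantizer = 10
    originals = [1024, 1015, 1014, 1005]

    # Round half up, as in the example (Python's built-in round() would round
    # halves to the nearest even number instead).
    quantized = [int(v / quantizer + 0.5) for v in originals]  # [102, 102, 101, 101]
    restored = [q * quantizer for q in quantized]              # [1020, 1020, 1010, 1010]

    print(restored)  # the lost precision never comes back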

Quantization involves one more step, which is reading the 64 values out in a zig-zag order, from the top left corner to the bottom right. That puts the large low-frequency values first and gathers the long runs of zeros at the high-frequency end together, so they can be encoded more efficiently.

In the next phase of JPEG compression, Huffman encoding removes as much redundant data as it can (not 102 102 102 102 102 but (5) 102s, to simplify) and calls the most frequently used values by shorter names (nicknames, if you will). This is a little like using "Bill, time for dinner!" rather than "William Jefferson Clinton, time for dinner!" 365 days a year.

A table of these codes, tailored to the image, may also be built at this stage (Huffman optimization).
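
A toy version of both ideas, collapsing runs of repeated values and handing the most frequent value the shortest nickname (the real encoder works on the zig-zagged, quantized coefficients, but the principle is the same):

    # Toy illustration of the entropy-coding stage: collapse runs of repeated
    # values, then give the most frequent values the shortest codes.
    from collections import Counter
    from itertools import groupby

    values = [102, 102, 102, 102, 102, 108, 108, 0, 0, 0, 0, 0, 0]

    # Run-length style: "(5) 102s" instead of "102 102 102 102 102".
    runs = [(len(list(group)), value) for value, group in groupby(values)]
    print(runs)  # [(5, 102), (2, 108), (6, 0)]

    # Huffman-style nicknames: the most common value gets the shortest code.
    counts = Counter(values).most_common()
    codes = {value: "1" * rank + "0" for rank, (value, _) in enumerate(counts)}
    print(codes)  # {0: '0', 102: '10', 108: '110'}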

And that's all there is to it.

Wait a minute, you grab my arm. What about color?! You're just talking about intensity here, brightness. You haven't said anything about color!

And while it's true that the basic procedure could be applied to, say, each channel of an RGB image (to the Red channel, the Green channel and the Blue, individually), that isn't as efficient as it might be. That's where color space conversion comes in.

Luminance or brightness is almost always much more important in our understanding of an image than color or chrominance. (If you've got Photoshop, try it yourself: look at your full color image in Lab Mode and see if the lightness channel isn't easier to recognize than the other two.)

So your RGB image is converted into a different color space called YCbCr (similar in spirit to Lab, which is what Photoshop uses when you convert modes). Y is the luminance information, Cb the blue chrominance and Cr the red chrominance. This conversion helps compression because most of the image's information moves into the luminance channel. The values in the luminance channel vary far more than those in the two color channels, which means the color channels can be compressed quite a bit.
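
The conversion itself is plain arithmetic. A minimal Python sketch using the standard JFIF conversion formulas; one pixel is enough to show the idea:

    # Convert one RGB pixel to YCbCr using the standard JFIF formulas.
    def rgb_to_ycbcr(r, g, b):
        y = 0.299 * r + 0.587 * g + 0.114 * b
        cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
        cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
        return y, cb, cr

    print(rgb_to_ycbcr(255, 0, 0))      # pure red: modest luminance, strong Cr
    print(rgb_to_ycbcr(200, 200, 200))  # neutral gray: Cb and Cr sit at 128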

Subsampling (we'll be brief) takes advantage of your eye's lower sensitivity to fine variations in chrominance. So you can discard more color information (using a higher subsampling factor) than it would be wise to discard from luminance.
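
For the common 4:2:0 scheme, that means keeping one chrominance value for every 2x2 block of pixels. A minimal sketch with NumPy, assuming a made-up full-resolution chroma channel:

    # 4:2:0-style chroma subsampling: average each 2x2 block of a chroma channel.
    # Assumes NumPy; cb stands in for a full-resolution chrominance channel.
    import numpy as np

    cb = np.random.randint(0, 256, (480, 640)).astype(float)

    # Collapse each 2x2 block into one value: a quarter of the original data.
    subsampled = cb.reshape(240, 2, 320, 2).mean(axis=(1, 3))

    print(cb.shape, "->", subsampled.shape)  # (480, 640) -> (240, 320)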

DOWN THE ROAD

Still worried about "lossy" compression?

Wavelets and fractals are alternatives to the methods used in JPEG compression. Products employing these techniques promise -- guess what -- better quality and better compression. And we're starting to see them pop up not only in wish lists but in Photoshop plug-ins.

We've been dabbling with MT-WICE (a Photoshop file format plug-in that uses wavelets at http://www.mt.mevis.de/) and MrSID (Multiresolution Seamless Image Database, which also relies on wavelets, at http://www.lizardtech.com/). We'll let you know if we get very excited about either.

Meanwhile, for more information about JPEG, visit the JPEG FAQ at http://www.faqs.org/faqs/jpeg-faq/, the JPEG home page at http://www.jpeg.org/ (where you can get a Java implementation of JPEG 2000) and the full story on the extensions for electronic photography at http://www.pima.net/standards/iso/standards/documents/N4378.pdf. For the Exif format, visit http://www.butaman.ne.jp/~tsuruzoh/Computer/Digicams/exif-e.html.

