Processing images is so much fussier than processing text. Everything seems simple until you need to add in the pictures.

In my quest to make my website directly translatable into a print book, I’ve had to think about how to handle images. On a website you want to keep things small so the reader can download them quickly. But when printing a book you want images to be of the highest quality possible — the size of the image file is no constraint.

So I end up creating and managing two copies of any images I use. I want to keep the high-quality original with my local copy of the post (for later use in the print version), and create a smaller, optimized version for use on the web.

This could easily become very tedious. I needed a way to automate this process.

I finally kludged together an Alfred workflow powered by a couple of shell scripts. Here it is in action:


The code powering this workflow is messy but the result is fast and elegant compared with doing it by hand. When applied to photos taken from my iPhone 5S, the web-optimized image is typically about 90% smaller than the original.

For my websites, I try to ensure that images look good even on Retina screens. My approach to Retina-quality responsive images is simple: serve up the same image to everyone and display it scaled down by at least 2⨉ (or more on smaller viewports). I don’t use server or scripting hacks to figure out which version of an image file to serve up, and until the wrinkles get ironed out I won’t touch the new srcset attribute with a ten-foot pole. Making the same image file work for everyone isn’t a terribly resourceful way to use bandwidth, so I compensate by optimizing that image as much as possible without noticeable loss of quality.

Since it’s so specific to my unusual setup and priorities, I probably won’t bother to clean this up enough to distribute it as a packaged workflow, but I can share the most relevant portions of the code easily enough.

After creating a new copy of the image using the short name provided, the workflow script tests whether the original image is wider than 2,048 pixels:

imagewidth=$(/usr/local/bin/identify -format "%w" "$query")
if ((imagewidth > 2048)); then
    /usr/local/bin/mogrify -path "$foldername/" \
        -filter Triangle -define filter:support=2 -thumbnail 2048 \
        -unsharp 0.25x0.08+8.3+0.045 -dither None -posterize 136 \
        -quality 82 -define jpeg:fancy-upsampling=off \
        -define png:compression-filter=5 -define png:compression-level=9 \
        -define png:compression-strategy=1 -define png:exclude-chunk=all \
        -interlace none -colorspace sRGB "$foldername/$newfile"
fi

The parameters for the mogrify command come from the post Efficient Image Resizing With ImageMagick.

I then use the TinyPNG service’s API to further optimize the file. TinyPNG is a great way to optimize both PNG and JPEG images, and an API key is free for up to 500 images per month (plenty for the average blogger). Here’s the code that sends up the file and then downloads the result (more examples on their developer page):

JSON=$(curl -i --user api:YOUR_API_KEY_HERE --data-binary @"$foldername/$newfile" https://api.tinypng.com/shrink 2>/dev/null)

URL=${JSON/*url\":\"/}
URL=${URL/\"*/}

curl "$URL" > "$foldername/$newfile" 2>/dev/null
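To make the parsing step concrete, here’s what those two parameter expansions do to a sample response (the JSON below is illustrative — the field names match TinyPNG’s response shape, but the values are made up):

```shell
# Made-up response body with the same shape as TinyPNG's reply
JSON='{"input":{"size":98765},"output":{"size":12345,"url":"https://api.tinypng.com/output/abc123.png"}}'

URL=${JSON/*url\":\"/}   # drop everything up to and including  url":"
URL=${URL/\"*/}          # drop the closing quote and everything after it

echo "$URL"   # → https://api.tinypng.com/output/abc123.png
```

This is crude compared with a real JSON parser, but for a response this predictable it gets the job done with zero dependencies.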

Now, this takes several seconds to complete. I don’t know how to display a progress indicator in an Alfred workflow, so I’m doing it audibly by playing one of OS X’s built-in alert sounds once just before the upload starts, and once when it’s over:

afplay /System/Library/Sounds/Submarine.aiff

Finally, I place the copies where they need to go: the original gets copied into an image folder next to where I save my writing; the optimized version gets uploaded to my web server with scp; and the Markdown-style link is copied to the clipboard (echo "![](img/$newfile)" | pbcopy).
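Sketched out, those final steps look something like this — the folder paths and the scp destination are placeholders, not my actual setup:

```shell
# Hypothetical stand-ins for the workflow's real variables and paths
foldername="/tmp/demo-img"                # working folder holding the images
archive="/tmp/demo-writing/img"           # image folder next to my posts
newfile="photo.jpg"

mkdir -p "$foldername" "$archive"
touch "$foldername/$newfile"              # stand-in for the processed image

# 1. Keep a copy in the image folder beside the writing
cp "$foldername/$newfile" "$archive/$newfile"

# 2. Upload the web-optimized copy to the server (placeholder destination)
# scp "$foldername/$newfile" user@example.com:/var/www/img/

# 3. Put a Markdown-style image link on the clipboard (pbcopy is macOS-only)
link="![](img/$newfile)"
echo "$link" | pbcopy 2>/dev/null || true
echo "$link"
```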