Processing images is so much fussier than processing text. Everything seems simple until you need to add in the pictures.

In my quest to make my website directly translatable into a print book, I’ve had to think about how to handle images. On a website you want to keep things small so the reader can download them quickly. But when printing a book you want images to be of the highest quality possible — the size of the image file is no constraint.

So I end up creating and managing two copies of any images I use. I want to keep the high-quality original with my local copy of the post (for later use in the print version), and create a smaller, optimized version for use on the web.

This could easily become very tedious. I needed a way to automate this process.

I finally kluged together an Alfred workflow powered by a couple of shell scripts. Here it is in action:


The code powering this workflow is messy, but the result is fast and elegant compared with doing it by hand. When applied to photos taken with my iPhone 5S, the web-optimized image is typically about 90% smaller than the original.

For my websites, I try to ensure that images look good even on Retina screens. My approach to Retina-quality responsive images is simple: serve up the same image to everyone and display it scaled down by at least 2⨉ (or more on smaller viewports). I don’t use server or scripting hacks to try and figure out which version of an image file to serve up, and until the wrinkles get ironed out I won’t touch the new srcset attribute with a ten-foot pole. Making the same image file work for everyone isn’t a terribly resourceful way to use bandwidth, so I compensate by optimizing that image as much as possible without noticeable loss of quality.

Since it’s so specific to my unusual setup and priorities, I probably won’t bother to clean this up enough to distribute it as a packaged workflow, but I can share the most relevant portions of the code easily enough.

After creating a new copy of the image using the short name provided, the workflow script tests whether the original image is wider than 2,048 pixels:

imagewidth=$(/usr/local/bin/identify -format "%w" "$query")   # width of the original image in pixels
if ((imagewidth > 2048)); then
    # Downscale the web copy to 2,048 pixels wide; the settings come from the article linked below
    /usr/local/bin/mogrify -path "$foldername/" -filter Triangle -define filter:support=2 -thumbnail 2048 \
        -unsharp 0.25x0.08+8.3+0.045 -dither None -posterize 136 -quality 82 -define jpeg:fancy-upsampling=off \
        -define png:compression-filter=5 -define png:compression-level=9 -define png:compression-strategy=1 \
        -define png:exclude-chunk=all -interlace none -colorspace sRGB "$foldername/$newfile"
fi
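
For reference, the variables in that snippet are set up earlier in the script: $query is the path of the original image handed over by Alfred, and $newfile is the renamed copy made from the short name I typed. That setup step isn’t shown above, but it amounts to something like this sketch ($shortname and $foldername stand in for however the workflow supplies them):

# Rough sketch of the earlier copy-and-rename step ($shortname and $foldername
# are hypothetical stand-ins; the real workflow gets them from Alfred)
extension="${query##*.}"              # keep the original file's extension
newfile="$shortname.$extension"       # e.g. "sunset" becomes "sunset.jpg"
mkdir -p "$foldername"
cp "$query" "$foldername/$newfile"    # this copy is the one that gets resized and optimized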

The parameters for the mogrify command come from the post Efficient Image Resizing With ImageMagick.

I then use the TinyPNG service’s API to further optimize the file. TinyPNG is a great way to optimize both PNG and JPEG images, and an API key is free for up to 500 images per month (plenty for the average blogger). Here’s the code that sends out the file and then downloads the result (more examples on their developer page):

# Upload the optimized file to TinyPNG's shrink endpoint; the response includes
# a JSON body containing the URL of the compressed result
JSON=`curl -i --user api:YOUR_API_KEY_HERE --data-binary @"$foldername/$newfile" https://api.tinypng.com/shrink 2>/dev/null`

# Crude string-munging to pull the download URL out of the JSON
URL=${JSON/*url\":\"/}
URL=${URL/\"*/}

# Fetch the compressed image, overwriting the local web copy
curl "$URL" > "$foldername/$newfile" 2>/dev/null
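
That string surgery works, but it’s brittle. A sturdier variant would drop curl’s -i flag so only the JSON body comes back, then hand it to a real JSON parser. Here’s a rough sketch, assuming a system python is available and that the compressed file’s URL sits under output.url in TinyPNG’s response (the same url field the pattern-matching above latches onto):

# Sketch of a less fragile alternative (assumes python is on the PATH)
JSON=$(curl --user api:YOUR_API_KEY_HERE --data-binary @"$foldername/$newfile" \
    https://api.tinypng.com/shrink 2>/dev/null)
URL=$(printf '%s' "$JSON" | python -c 'import json, sys; print(json.load(sys.stdin)["output"]["url"])')
curl "$URL" > "$foldername/$newfile" 2>/dev/null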

Now, this takes several seconds to complete. I don’t know how to display a progress indicator in an Alfred workflow, so I’m doing it audibly by playing one of OS X’s built-in sound effects once just before the upload starts, and once when it’s over:

afplay /System/Library/Sounds/Submarine.aiff
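
In the script, those two plays simply bracket the slow TinyPNG round trip:

afplay /System/Library/Sounds/Submarine.aiff    # cue: upload is starting
# ... the TinyPNG upload and download shown above ...
afplay /System/Library/Sounds/Submarine.aiff    # cue: the optimized file is back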

Finally, I place the copies where they need to go: the original gets copied into an image folder next to where I save my writing, the optimized version gets uploaded to my web server with scp, and the Markdown-style link gets copied to the clipboard (echo "![](img/$newfile)" | pbcopy).
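
Spelled out, those last steps amount to roughly the following (the folder and server paths are placeholders, not my real ones):

cp "$query" "$postsfolder/img/$newfile"                           # keep the full-quality original beside the post ($postsfolder is a placeholder)
scp "$foldername/$newfile" user@example.com:/path/to/site/img/    # upload the optimized copy to the web server (placeholder destination)
echo "![](img/$newfile)" | pbcopy                                 # put the Markdown-style link on the clipboard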