Preparing image for Facebook

I’ve been having pretty good luck with 300kb @ 1280px on the longest side. I used to get better results with a PNG, but not anymore :frowning: It’s a constant battle, isn’t it? It’d be nice if they gave us some clear guidelines.
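If it helps, one way I could imagine hitting roughly those numbers with ImageMagick (just a sketch, not what I actually use; input.jpg and the output name are placeholders):

# fit the longest side to 1280px, then ask the JPEG encoder to search for a
# quality that keeps the file under ~300kb
convert input.jpg -resize "1280x1280>" -define jpeg:extent=300kb fb_upload.jpg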

Same here. I gave up uploading PNGs, though I couldn’t do any better with JPGs. There are guidelines available: https://www.facebook.com/help/266520536764594
…sort of.
Anyway, anything over 100kb gets compressed, which pretty much urges you to post Malevich’s Black Square rather than photographs.

2 Likes

Okay, I hadn’t found that page. Thanks. A few weeks ago, I tried about 100 different combinations and that’s how I came up with the 300kb @ 1280px. But now I’ll give it another try at under 100kb. Thanks, Alex

I’ve been meaning to prepare a handful of test images and test various resolutions and sizes to see how badly fb mangles them. Yet another thing I’ll add to the “new post” file. :slight_smile:

I don’t use Facebook, but when I have to strip an image down to the minimum bearable, I normally use jpegoptim. On a Mac you can install it easily with Homebrew. For convenience I run it from an Automator app with an “Open with” shortcut, which gets things done quickly from the Finder. Off the top of my head, there’s also ImageOptim (app), which is quite nice, and image_optim, which is based on it; they also have some PNG alpha thingie. There’s Caesium too, for Mac and Windows. For the best compression/colour/accuracy ratio I’ve seen, it’s TinyPNG/TinyJPG: you can compress up to 20 files online, but their Photoshop plugin is expensive!!
BTW, as this site’s user level is quite high I feel the need to be straightforward: these are the 2 cents of someone who knows absolutely nothing about code, but has two eyes and follows the ancient copy-paste tradition :robot: {no fat buddha}
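If anyone wants to copy-paste along, a minimal install-and-run sketch (assuming Homebrew is installed; photo.jpg and the quality value are just examples):

brew install jpegoptim
# cap quality at 71 and strip all metadata markers, overwriting photo.jpg in place
jpegoptim --max=71 --strip-all photo.jpg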

Something of the sort (quality 71) // note there’s also a size option

for f in "$@"
do
    jpegoptim -d/OPTIM_output -f -m71 -b -P -v --strip-all --all-progressive "$f"
    echo "$f"
done
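If you’d rather run that same loop outside Automator, save it as a plain script (optim.sh is just a made-up name) and pass the files as arguments; I create the /OPTIM_output destination first to be safe:

mkdir -p /OPTIM_output
chmod +x optim.sh
./optim.sh ~/Pictures/export/*.jpg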

jpegoptim --help

-d<path>, --dest=<path>
                    specify alternative destination directory for 
                    optimized files (default is to overwrite originals)
  -f, --force       force optimization
  -h, --help        display this help and exit
  -m<quality>, --max=<quality>
                    set maximum image quality factor (disables lossless
                    optimization mode, which is by default on)
                    Valid quality values: 0 - 100
  -n, --noaction    don't really optimize files, just print results
  -S<size>, --size=<size>
                    Try to optimize file to given size (disables lossless
                    optimization mode). Target size is specified either in
                    kilo bytes (1 - n) or as percentage (1% - 99%)
  -T<threshold>, --threshold=<threshold>
                    keep old file if the gain is below a threshold (%)
  -b, --csv         print progress info in CSV format
  -o, --overwrite   overwrite target file even if it exists (meaningful
                    only when used with -d, --dest option)
  -p, --preserve    preserve file timestamps
  -P, --preserve-perms
                    preserve original file permissions by overwriting it
  -q, --quiet       quiet mode
  -t, --totals      print totals after processing all files
  -v, --verbose     enable verbose mode (positively chatty)
  -V, --version     print program version

  -s, --strip-all   strip all markers from output file
  --strip-none      do not strip any markers
  --strip-com       strip Comment markers from output file
  --strip-exif      strip Exif markers from output file
  --strip-iptc      strip IPTC/Photoshop (APP13) markers from output file
  --strip-icc       strip ICC profile markers from output file
  --strip-xmp       strip XMP markers from output file

  --all-normal      force all output files to be non-progressive
  --all-progressive force all output files to be progressive
  --stdout          send output to standard output (instead of a file)
  --stdin           read input from standard input (instead of a file)
1 Like

@patdavid I tried the best resolution/format for facebook experiment out a while back, although it sounds like things may have changed since then.

2 Likes

Yeah, it looks like it might be time to try another experiment (and thank you for that link - I’ll make sure to include it!).

Alright, I’ve conducted some tests with my facebook page and most of the sizes/methods suggested in this thread.
I tried:

  1. parameters suggested by @D_W ;
  2. parameters suggested by Harry Durgin;
  3. compression method suggested by @chroma_ghost

For all methods I used a portrait-orientation image produced in both PNG and JPG formats.

  1. @D_W 's parameters yielded the best result. The best format is PNG. The best dimension is 2048 px. However, I must say that this 2048px length does not apply only to the width, as suggested by @D_W in the article. According to Facebook’s guidelines, and as I tested here myself, 2048px should be the longest side, i.e. whichever is longer: 2048px max width for landscape orientation OR 2048px max height for portrait orientation.

  2. @harry_durgin 's parameters were not the best, though not the worst, compared to Facebook’s recognized 720/960/2048 px dimensions. Right in the middle, as can also be seen from the figures.

  3. Although @chroma_ghost did not suggest any exact parameters, I decided to pick something from his advice too. I tried https://tinypng.com/ and my PNG dropped a sweet 69% of its size. However, color artifacts were noticeable in saturated areas. (I will also try ImageOptim later when I have time.)

So, with the advice you kindly provided, the conclusion I’ve reached is that the best parameters for Facebook are a PNG image with 2048px on the longest side. A quick resize sketch follows below.
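For anyone who wants to script that, an ImageMagick sketch (my own guess at a convenient one-liner; input.tif and the output name are placeholders):

# fit whichever side is longer to 2048px, keep the aspect ratio, write a PNG
convert input.tif -resize "2048x2048>" facebook_2048.png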

4 Likes

Dude, thanks so much for taking the time to do this. Looks like I’m going back to using PNG.

On a similar theme, anyone know what is best (size, format?) for Tumblr (I use it for my blog, because it is so easy to insert photos amongst the text, but the sharpness suffers)?

Are there any official (or, at least, any) guidelines available for Tumblr? I personally don’t use Tumblr, but I could help figure out the best parameters.

All I’ve found is “If you get persnickety about image widths, keep in mind that images 300px and larger will automatically scale to fit the Dashboard (540px).”

I’m not sure if that means a width of 540 pixels is optimal. Bad news for me if it is.

Looks like 540px wide is the dashboard image size.
High quality should be 1280px wide?

Tumblr Dashboard Image Display Sizes (Updated March 20, 2016):
Photo Post: 540 by 810 pixels for dashboard view. Use 1280 by 1920 pixels for high-res version (except for superwide panoramas).
“Tall”...

Looks like you’ll want to target 540px for the dashboard view (assuming that is where the most notes come from?).
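Purely as a sketch, using the sizes from the quote above (file names are placeholders, and I haven’t verified this against Tumblr myself):

# 540px-wide version for the dashboard, 1280px-wide high-res version
convert input.jpg -resize 540x tumblr_dash.jpg
convert input.jpg -resize 1280x tumblr_highres.jpg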

Looks like I need to learn some more about Tumblr :frowning:
Like what the dashboard is, and what notes are.

Thanks for this.

Preparing image for Facebook?
->dev/null
See Facebook

A bit OT; more of an update on the rescale/compression methods I mentioned.

Either ’cause I’m lazy or stupid (hopefully both), and contrary to what everybody and their dog says, I never sharpen images after downscaling; instead I use algorithms that retain the perceived original sharpness at the smaller size. That’s what I tell doctor Martin.

Anyway, after trying the newest and shiniest {cough cough, fart, smile} Google compression algorithm, Guetzli, which is unbearably slow, I changed my workflow to use ImageMagick’s convert to batch-downscale images and mozjpeg’s cjpeg to compress them. The results are tiny MGP (motherfucka group of pixels) that travel fast through the interweb’s veins :racehorse: :camel::poodle: :goat: :ant: :scorpion:

 
PS
These mozjpeg settings are working well for me:
cjpeg -quality 84 -dct float -dc-scan-opt 2 input > output
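In case it’s useful, the downscale and the compression can be chained in one go (a sketch, assuming convert and mozjpeg’s cjpeg are on the PATH and that cjpeg reads PPM from stdin):

# downscale to 1400px wide, pipe as PPM straight into mozjpeg's encoder
convert input.png -resize 1400 ppm:- | cjpeg -quality 84 -dct float -dc-scan-opt 2 > output.jpg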

2 Likes

Would you consider posting any examples?

Sure, I’m reusing the ones from the PV forums. Not real-world samples, but charts; also consider that they’ve all been stripped of any metadata (18 images).

https://drive.google.com/file/d/0Bxtrjp4jb-YsZzF2bUxYeVhfSUE/view?usp=sharing

Methodology: set the environment by covering the lamps with cheap, flammable red synthetic cloth, put on loud music, and have at hand enough whiskey, ice cream and belly dancers; if that’s not possible (living in an igloo), beer and an old cat with a strange disease will do. WHITE ROBE and thick glasses, live stream to YT :stuck_out_tongue:

From Photoshop I exported the base PNG, not included (very big). Also within Photoshop I downscaled with c3c (micro-contrast + 1 sharpness) and exported with TinyJPG; those are the c3c/c3c_plusSH and c3c_tiny samples. On the other hand, I downscaled the PNGs with ImageMagick, convert "$f" -resize 1400 "${f%.*}_magic.jpg" (magic samples), and compressed them with mozjpeg using the settings above (moz samples), with jpegoptim at quality 84 (optim samples) and, for just one of them - sorry, but it takes too long - with Guetzli: guetzli --quality 84 input output. Also tried the parallel processing with multiple inputs suggested on GitHub; still too slow.
Put down the cat, END

2 Likes

Not a solution, but an approach:

I have come to terms with the aggressive compression by simply feeding Facebook the highest quality image and enabling high quality. My rationale is that the way Facebook will handle any given image is a black box and subject to change. I see little point in lowering the quality or otherwise manipulating the source only to have Facebook do its worst on top of that. That said, there are always new methods to explore :sunny:.

I guess the alternative would be to upload the images elsewhere and link to them from Facebook. I haven’t explored this yet. It would be great if someone could give me some pointers (maybe in a new thread). I am looking for an option that is free, private, long-lasting and plays nice with Facebook.

I’m curious to see where this experiment is going :slight_smile: