From darktable to social media

This is a general answer to people asking about the best export options in darktable for posting to social media.

Some introduction

Social media strip pretty much all EXIF from your pictures, including any ICC profile you may have embedded there. They simply don’t care. Also, if your picture doesn’t fit the platform’s maximum resolution and maximum file size, it will be harshly downsized with the platform’s very destructive algorithms. Again, they don’t care.

So the rule of thumb is to not be zealous about image quality and serve them just what they ask for. Besides, people will most likely look at your pictures out of boredom while taking a number 2, on a shiny screen full of finger grease and Escherichia coli. So… don’t spend too much time on that, it’s not worth it. Work for prints, work for your own website, but don’t bother too much about quality for social platforms.

Editing

If I were really naughty, I would say that editing for social media should be +100 of local contrast, +100 of global contrast, +100 of saturation and +100 of sharpness. It seems you can’t go wrong by cranking everything to the top if you are after those precious likes. :man_shrugging:

One important module to enable is dithering. This will help prevent some “banding effect” when using JPEG compression (mostly by making compression harder, hence increasing file size…). You can put that in a style and enable it only when exporting to web JPEG. Just be aware that banding can still happen if you go too harsh on the JPEG compression, though.
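If you script your exports, such a style can also be applied from the command line with darktable-cli. A minimal sketch, assuming you saved the style under the hypothetical name “web-jpeg” (the file names are placeholders too):

$ darktable-cli photo.raf photo.xmp export.jpg --style "web-jpeg" --width 1080 --height 1080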

Exporting

Color space

Export in sRGB, with perceptual intent.

If you export for the World Wide Web, outside of cheap platforms that strip EXIF, you could technically export to whatever color space you want, provided the ICC profile is written in the file metadata, since nowadays pretty much all browsers are color-managed and can convert whatever color space to your display space. darktable always writes the color profile in the metadata. However, once the EXIF has been stripped away, as Instagram and Facebook will do, the convention is to assume sRGB. So, no profile means the file is interpreted as sRGB. Thus, encode it to sRGB at export for safety.

But, for your own website or for photographer-centric platforms, any color space will do as long as the EXIF data is kept.
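To double-check which profile actually ended up in an exported or re-downloaded file, exiftool can print the embedded ICC description; a quick sketch with a placeholder file name:

$ exiftool -ICC_Profile:ProfileDescription export.jpg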

Resolution / pixel count

Double-check this, because it changes all the time, but in 2021 it seems pretty much all platforms have converged toward a maximum size around 1080×1080 px. Remember, if you send larger files, the platform will shrink and melt them with destructive compression, so just send what they ask for, compressed in-house with reasonable algorithms, but nothing more.
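If you prefer to downsize outside of darktable, a plain ImageMagick resize does the job; a sketch with placeholder file names, where the trailing “>” means “only shrink, never enlarge”:

$ magick input.jpg -resize '1080x1080>' output.jpg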

File format

JPEG.

The tricky thing with JPEG is that it tries to achieve high compression rates by crushing smooth gradients first. That’s why banding artifacts show up first in smooth skies and blurry backgrounds.

A quality factor of 85 usually gives a good trade-off between size and quality, with no noticeable quality destruction. If, even with dithering, you find banding in your smooth zones, either increase the factor (to increase the quality) or try adding some noise/grain to trick the compression algo with high frequencies.
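If you want to try the noise trick outside of darktable, ImageMagick can add a touch of Gaussian noise right before the JPEG encode; a sketch with assumed values and placeholder file names:

$ magick smooth.tif -attenuate 0.15 +noise Gaussian -quality 90 smooth.jpg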

WebP would be so much better, and Facebook is supposed to support it; however, Apple Safari has only had partial support for it since September 2020, and support on other platforms only became ubiquitous around 2019. Also, in case a platform doesn’t support it, you probably don’t want to rely on FB to provide a JPEG fallback.

Metadata

You can try embedding the copyright and the title of the picture in the metadata, just in case they are kept, but anything else will be stripped away, so don’t bother. In any case, remove the GPS tags, if any, so the GAFAM can’t get too much data on you.
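Stripping the GPS tags before uploading can be done with exiftool; the file name is a placeholder:

$ exiftool -gps:all= -overwrite_original export.jpg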

File size/weight

Until 2018, it was said that the max weight for Facebook images was 100 kB if you wanted to avoid extra compression. Past that threshold, FB would apply another pass of ugly compression.

I have no idea if that is still up to date, but remember that the EXIF/XMP can weigh more than 40 kB if you also include the development history. In any case, for a 1080×722 px image, you would need to go as low as a quality factor of 80-82 to get under that threshold.
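To see how much the metadata actually weighs in a given file, you can write a metadata-free copy with exiftool and compare the sizes; a sketch with placeholder file names:

$ exiftool -all= -o stripped.jpg export.jpg
$ ls -l export.jpg stripped.jpg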

Remember that the compression ratio is contextual. Pictures with a lot of high frequencies (landscapes at f/8 and less, noisy/grainy pictures, etc.) will not be compressed as much as smooth pictures, even with the same compression quality factor.

It can therefore be worth a try to reduce the resolution in order to keep the compression minimal and preserve gradients, rather than having 2048×2048 px of pure banding artifacts at quality = 60.

However, you have no control over the resized/cropped thumbnails that the social media might serve.

Bonus: how to check what social media do to EXIF?

You can try to upload a test picture to some social media with full EXIF, and then download the final file and analyze it with exiftool. Here is an example from Instagram:

$ exiftool 135383691_845786625998420_1798965656672895089_n.jpg 
ExifTool Version Number         : 12.00
File Name                       : 135383691_845786625998420_1798965656672895089_n.jpg
Directory                       : .
File Size                       : 231 kB
File Modification Date/Time     : 2021:02:03 03:15:43+01:00
File Access Date/Time           : 2021:02:03 03:15:44+01:00
File Inode Change Date/Time     : 2021:02:03 03:15:43+01:00
File Permissions                : rw-rw-r--
File Type                       : JPEG
File Type Extension             : jpg
MIME Type                       : image/jpeg
JFIF Version                    : 1.01
Resolution Unit                 : None
X Resolution                    : 1
Y Resolution                    : 1
Current IPTC Digest             : 1e43db3711d76ed95c3c5e43963c08bd
Special Instructions            : FBMD2300096a010000574400002e6f0000ce850000a3730100aebc0100723e0200048c020066ce0200529c0300
Image Width                     : 1080
Image Height                    : 1080
Encoding Process                : Progressive DCT, Huffman coding
Bits Per Sample                 : 8
Color Components                : 3
Y Cb Cr Sub Sampling            : YCbCr4:2:0 (2 2)
Image Size                      : 1080x1080
Megapixels                      : 1.2

No copyright, no nothing. They own your pics.


To show that social media care even less about you than Aurélien says: they change the image requirements and policies on the fly.

Online you often see articles to the effect of “Facebook 2021 Photo Uploading Specs”. Don’t trust the specifics, because they will have already changed. (That said, they can be helpful because they do get into a bit of the nitty-gritty on the various circumstances you should be aware of, such as where and how you are uploading to the platform, and for what purpose. E.g. the specs for a header image are different on a Page, an Event and the Feed. Sometimes you have to upload one image that caters to multiple use cases!) Don’t trust Facebook documentation either. It may not be accurate at the time of access, and it is usually too vague to begin with. They will crop, destroy your aspect ratio, clip and distort your videos any way they wish at whatever time they wish.

The best way to combat this is to try everything and see what sticks at the moment. (The advice above to upload a test image and see what happens to your EXIF is good. Don’t just check your EXIF; also view your image under different conditions and systems to evaluate how it is affected.) Yes, put your OCD goggles on, send your time down the drain and grab the axe to break the glass: try every single method and see how the responsive design works on all browsers, OSes, systems and orientations, with various add-ons and edge cases. Or just don’t care. Be vain and bare your soul to the internets.

Bonus

A couple of years ago, I read in the news that a woman’s colleague found a “sexy” advert featuring her, by word of mouth. The thing was that she had never uploaded such a photo and was happily married, with no second thoughts. So where did this image come from? Why was it edited to be “sexy”? Why did it have personal info? Short answer: because FB sells your info and likeness without reserve. You are their product.

FB was just an example. I have no beef with them. But seriously, know what you are getting into: stay safe digitally. :male_detective:


Sorry for the multiple edits.

And read the fine print: at least a number of years ago, Instagram (& Facebook?) had wording in their conditions of use whereby you granted them a royalty-free, transferable, world-wide license to use and sub-license content you uploaded, with no compensation (with some exceptions for content marked private)…

:rofl:

Uploading PNG instead of JPG results in sharper images on Facebook.

That’s most likely because FB handles the compression itself, directly from HQ images, instead of recompressing something that was already compressed before. But Instagram, for instance, doesn’t accept PNG files IIRC.

I believe this goes for more than photos. If you upload documents to Google Drive, I believe they have the right to scan them for content and use them… i.e. again, they own your files even though they say you do… so really it’s not free space.

" Your Content in our Services: When you upload or otherwise submit content to our Services, you give Google (and those we work with) a worldwide licence to use, host, store, reproduce, modify, create derivative works (such as those resulting from translations, adaptations or other changes that we make so that your content works better with our Services), communicate, publish, publicly perform, publicly display and distribute such content."

One big difference is that they don’t talk about transferring the license or sublicensing…
They need some permissions to publish your work on their services, after all.

Basically, Google might have a hard time selling your content at the request of a 3rd party, which the (then valid) Instagram license explicitly allowed…

I don’t want to get into a discussion about which of those conditions are “right” or “just”, but it is something to be aware of when uploading content (images, in the case of this forum).

As with many of these agreements, they are sufficiently vague. And as you said, you just need to be aware. It is funny that on one hand they say they respect your privacy, and on the other they say that if you share something it’s fair game, essentially public domain… they may not use those words… this is part of their current terms, in Canada anyway; it looks like there are country-specific versions of the documents.

On JPG compression: a trick I use is to find the maximum compression that gives no more than a certain difference from my source file.

For “difference”, I use a threshold of 1% RMSE. With ImageMagick:

f:\web\im>%IMG7%magick toes.png -quality 80 x.jpg

f:\web\im>%IMG7%magick compare -metric RMSE toes.png x.jpg NULL:
1013.35 (0.0154627)

The RMSE is 0.015 on a scale of 0 to 1, so 1.5%, so too far from the source to be acceptable.

I do the work in a CMS program, but it is easily scripted.
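For what it’s worth, here is a minimal bash sketch of that idea, assuming ImageMagick 7 and a 1% RMSE threshold; it walks the quality down and keeps the strongest compression that still stays within the threshold (file names and the quality steps are just examples):

#!/bin/bash
# Find the lowest JPEG quality (= strongest compression) whose RMSE
# against the source stays at or below 1 %.
src=toes.png
best=95
for q in 95 90 85 80 75 70 65 60; do
    magick "$src" -quality "$q" candidate.jpg
    # compare prints "absolute (normalized)" RMSE on stderr; keep the 0..1 value
    rmse=$(magick compare -metric RMSE "$src" candidate.jpg NULL: 2>&1 | sed 's/.*(\(.*\)).*/\1/')
    # stop as soon as the difference exceeds the threshold
    awk "BEGIN { exit !($rmse <= 0.01) }" || break
    best=$q
done
echo "keeping quality $best"
magick "$src" -quality "$best" final.jpg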

Delta E, PSNR, RMSE, SSIM… are all imperfect metrics that suck at predicting human vision at some point. Honestly, looking at 2 JPEGs encoded at 80 or 95, I can’t tell the difference at 1:1 zoom for most pictures. If you need some forensic computational method to tell the difference, it’s probably because you don’t see it, so it’s fine with me.

Also, posting shitty pics on social media is an excellent excuse to keep selling HD prints. Good for business too. Again, seeing how and how much visual content people consume and immediately forget these days, I’m not too bothered with social media RMSE.


I hacked together a simple bash script that runs a for loop based on ImageMagick for each full-size JPG in a temp folder.
Basically it runs (all values up to personal liking):

convert -resize 1000x1000 -define jpeg:extent=248KB -unsharp 0.4x0.6 infile outfile

The sweet thing is -define jpeg:extent, which causes convert to approach the requested maximum file size iteratively by adjusting the compression quality bit by bit. The highest quality whose output does not exceed the required maximum size is kept.
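A minimal sketch of such a loop, assuming the full-size JPEGs sit in a ./tmp folder and the results go to a ./web folder (all names and values are just examples):

#!/bin/bash
# Resize, cap the file size and lightly sharpen every JPEG in ./tmp
mkdir -p web
for f in tmp/*.jpg; do
    convert "$f" -resize 1000x1000 -unsharp 0.4x0.6 \
        -define jpeg:extent=248KB "web/$(basename "$f")"
done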

Other things to play around with are -border and -bordercolor, as some sites have these black backgrounds and I like to have a white frame around the picture (thank you for the color assessment mode in dt).

I guess if you would rather trust a measurement than your own eyes, I think Butteraugli is the way to go. But still… always use your eyes.

And start using something like mozjpeg if you care that much about jpeg efficiency.
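For reference, mozjpeg ships a drop-in cjpeg encoder; a quick sketch, assuming it is installed and using placeholder file names (classic cjpeg reads PPM/BMP-style input):

$ cjpeg -quality 85 -outfile photo_moz.jpg photo.ppm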

I’m skeptical that you can upload something without it ever being recompressed. If it gets recompressed anyway, just uploading the best source that is not too grainy and not too sharp is probably your best bet.

Not a social media user here, let alone a poster. So what do I know.

btw: I think the hashtag darktable is quite active on Instagram, apparently it is possible to get views and favs with it.

Forget webp, I’m waiting for AVIF :slight_smile:


My experience is that Facebook won’t downsize your pics as long as they’re at 2048px or less on the long edge, but they will definitely slaughter your image with their compression algorithms regardless. Even if you download your photo from FB and repost that same photo they will compress it once again, so resistance is futile.

My workaround is to send the picture to Flickr and post the link on Facebook. That way, if someone wants to look at my pictures while taking a number 2, on a shiny screen full of finger grease and Escherichia coli, they have that option.

And thanks for planting that thought in my brain @anon41087856

I follow it and tag my photos too. There are the occasional moody food shoots that are not related to our editor :sunglasses: A more recent #darktableedit tag is also active.

Yea, you are right, those food photographers are spamming the hashtag, especially the front page.

Apparently there are several restaurants named “Dark Table”…

Something to note, not only for social media but also for your personal homepage or Flickr: since JPEG only specifies the bitstream and the decoder, there is room for improvement in the encoder. One of the better ones is MozJPEG (Github, short demo/features).
Another one (but with ginormous runtimes) is Google’s Guetzli (Github).
And here is a comparison between Guetzli and, amongst others, MozJPEG: Guetzli vs. MozJPEG
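For reference, Guetzli is also a plain command-line encoder (expect very long runtimes and a lot of RAM); a sketch with placeholder file names:

$ guetzli --quality 90 input.png output.jpg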

If SSIM is not good enough of a quality metric for your taste, there are alternatives:

  • Guetzli uses an Image Quality Assessment (IQA) metric called Butteraugli and optimizes its output for a high IQA score. Butteraugli Github
  • There is Netflix’s VMAF (aimed at video formats, but somewhat open source with a BSD2+patent license). Github
  • There is GMSD. Paper
  • There is MDSI. Paper

(The last two also exist somewhere in code form, not sure about the licenses, once I have dug that out, I’ll post it.)

BUT, I also tend to use compression artefacts and image dimensions as copy-‘protection’. Full quality only for paying customers. So there’s that.

EDIT: found something ‘better’, a pytorch implementation of several IQA metrics. Turns out, there are many many more. Github PyTorch Image Quality

EDIT1: holy moly, I just saw @jorismak’s post… sorry to duplicate stuff :frowning_face:


The challenge with compressing beforehand is that it will definitely be redone by the social platform, and compressing something over and over again can destroy the quality of your image. If I were to post, I wouldn’t try to be clever; I would upload the highest-quality image that I am comfortable sharing and let the overlords have at it.