From darktable to social media

Delta E, PSNR, RMSE, SSIM… are all imperfect metrics that suck at predicting human vision at some point. Honestly, looking at two JPEGs encoded at quality 80 or 95, I can’t tell the difference at 1:1 zoom for most pictures. If you need some forensic computational method to tell the difference, it’s probably because you don’t see it, so that’s fine with me.

Also, posting shitty pics on social media is an excellent excuse to keep selling HD prints. Good for business too. Again, seeing how, and how much, visual content people consume and immediately forget these days, I’m not too bothered about RMSE on social media.


I hacked together a simple bash script that runs a for loop based on ImageMagick over each full-size JPG in a temp folder.
Basically it runs the following (all values up to personal liking):

convert -resize 1000x1000 -define jpeg:extent=248KB -unsharp 0.4x0.6 infile outfile

The sweet thing is -define jpeg:extent, which causes convert to approach the requested maximum file size iteratively by adjusting the compression quality bit by bit. If the output file gets larger than the required maximum size, the last fitting quality value is taken.
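A minimal sketch of the whole loop, for anyone who wants to copy the idea (the fullsize/ and web/ directory names are placeholders of mine; the values are the ones from above):

#!/bin/bash
# Resize every full-size JPG to fit within 1000x1000, sharpen lightly, and let
# ImageMagick pick the highest JPEG quality that keeps the file under 248 KB.
mkdir -p web
for f in fullsize/*.jpg; do
    convert "$f" \
        -resize 1000x1000 \
        -unsharp 0.4x0.6 \
        -define jpeg:extent=248KB \
        "web/$(basename "$f")"
done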

Other things to play around with are -border and -bordercolor, as some sites have these black backgrounds and I like to have a white frame around the picture (thank you for the color assessment mode in dt).

I guess if you would rather trust a measurement instead of your own eyes, Butteraugli is the way to go. But still… always use your eyes.

And start using something like mozjpeg if you care that much about jpeg efficiency.
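If you want to try that combination, both are plain command-line tools; a rough sketch (the /opt/mozjpeg path and the quality value are just my assumptions; MozJPEG ships its own cjpeg, and the Butteraugli binary simply takes two images and prints a perceptual distance):

# Encode with MozJPEG's cjpeg; it reads PPM, so pipe through ImageMagick if needed.
convert in.png ppm:- | /opt/mozjpeg/bin/cjpeg -quality 85 > out.jpg

# Double-check the result with Butteraugli (prints a distance score, lower is better).
butteraugli in.png out.jpg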

I’m skeptical that you can upload something without it ever being recompressed. If it gets recompressed anyway, just uploading the best source that is not too grainy and not too sharp is probably your best bet.

Not a social media user here, let alone a poster. So what do I know.

btw: I think the #darktable hashtag is quite active on Instagram; apparently it is possible to get views and favs with it.

Forget webp, I’m waiting for AVIF :slight_smile:


My experience is that Facebook won’t downsize your pics as long as they’re at 2048px or less on the long edge, but they will definitely slaughter your image with their compression algorithms regardless. Even if you download your photo from FB and repost that same photo they will compress it once again, so resistance is futile.

My workaround is to send the picture to Flickr and post the link on Facebook. That way, if someone wants to look at my pictures while taking a number 2, on a shiny screen full of finger grease and Escherichia coli, then they have that option.

And thanks for planting that thought in my brain @anon41087856

I follow it and tag my photos too. There are occasional moody food shoots that are not related to our editor :sunglasses: A more recent #darktableedit tag is also active.

Yea, you are right, those food photographers are spamming the hashtag, especially the front page.

Apparently there are several restaurants named “Dark Table”…

Something to note, not only for social media but also for your personal homepage or Flickr: since JPEG only specifies the bitstream and the decoder, there is room for improvement in the encoder. One of the better ones is MozJPEG (GitHub, short demo/features).
Another one (but with ginormous runtimes) is Google’s Guetzli (GitHub).
And here is a comparison between Guetzli and, amongst others, MozJPEG: Guetzli vs. MozJPEG
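For the record, Guetzli is just as easy to invoke, it only takes forever; something like this (quality 90 is an arbitrary choice of mine, and by default Guetzli refuses values below 84):

# Guetzli reads PNG or JPEG input directly; expect very long runtimes and a lot of RAM.
guetzli --quality 90 in.png out.jpg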

If SSIM is not good enough of a quality metric for your taste, there are alternatives:

  • Guetzli uses an Image Quality Assessment (IQA) metric called Butteraugli and optimizes its output for a high IQA score. Butteraugli GitHub
  • And Netflix’s VMAF (aimed at video formats, but somewhat open source under BSD-2-Clause plus a patent grant). GitHub
  • There is GMSD. Paper
  • There is MDSI. Paper

(The last two also exist somewhere in code form; I’m not sure about the licenses. Once I have dug that out, I’ll post it.)
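For a quick first look you don’t even need any of those, though: plain ImageMagick can already print PSNR (and, in newer builds, SSIM) from the shell. File names here are placeholders, and compare writes the score to stderr:

# Compare the recompressed upload against the original export.
compare -metric PSNR original.png recompressed.jpg null: 2>&1
# Newer ImageMagick builds also accept SSIM as a metric.
compare -metric SSIM original.png recompressed.jpg null: 2>&1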

BUT, I also tend to use compression artefacts and image dimensions as copy-‘protection’. Full quality only for paying customers. So there’s that.

EDIT: found something ‘better’: a PyTorch implementation of several IQA metrics. Turns out there are many, many more. GitHub: PyTorch Image Quality

EDIT 1: holy moly, I just saw @jorismak’s post… sorry to duplicate stuff :frowning_face:


The challenge with compressing beforehand is that it will definitely be redone by the social media site. Compressing something over and over again can destroy the quality of your image. If I were to post, I wouldn’t try to be clever; I would upload the highest-quality image that I am comfortable sharing and let the overlords have at it.

So far AVIF has been slow to gain traction, but it’s coming (image formats always seem to arrive more slowly than video formats… people seem content with JPEG).

It’s good at getting passable quality out of very low file sizes, but it has trouble holding on to small detail and fine grain and such.

I was trying and trying but I couldn’t get a file out of aomenc or rav1e that was the same size and contained the same (fine) detail as a simple heifenc command.

Meanwhile, the latest JPEG XL encoder made a file that was smaller AND kept the fine detail, in less encoding time. And it supports 16 bits per channel and has a lossless mode. I’m sold; now I just need more program support :).
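If anyone wants to repeat that kind of test, the reference encoders are easy enough to drive from a shell; a rough sketch with settings I would start from, not the exact ones I used above (avifenc from libavif is just a common alternative to driving aomenc directly):

# JPEG XL reference encoder (libjxl): -d sets the target Butteraugli distance,
# where roughly 1.0 is "visually lossless" and 0 means mathematically lossless.
cjxl in.png out.jxl -d 1.0

# AVIF via libavif's avifenc: --min/--max set the quantizer range (lower = better quality),
# -s trades encoding speed against compression efficiency.
avifenc --min 20 --max 28 -s 6 in.png out.avif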

On the other hand, JPEG XR (Microsoft HD Photo) was also clearly better than JPEG, and also beat most JPEG 2000 encoders in my tests… And it takes a fraction of the CPU power to do its thing. And it supports lossless, 16 bits per channel and 16-bit floating point…

It sounds perfect for cameras as a JPEG replacement (it even uses the base JPEG algorithms… existing code and/or hardware chips even had a chance of being reusable for JPEG XR), but it never took off.
So I’ve kinda given up hope that we’ll see much change for the good technical reasons… The only chance to get past JPEG is if the Googles and the Apples decide to put their weight behind something.

AVIF is but a wild hope.

https://caniuse.com/avif

WebP took 10 years to get support (Apple finally started caring with the latest Safari, released in Sept. 2020), and now we’re talking about AVIF…


Don’t get us started on image file formats again. Plenty of old threads to weather your COVID boredom. (I don’t understand why people can claim boredom. These times are anything but boring! And I have been busier than ever!)

Is it possible that Instagram does not like the sRGB profile that is embedded in photos that were processed with darktable? I noticed that Instagram often changes the colors if I upload photos that were processed with darktable but not if I upload photos that were processed with RawTherapee.
This is a photo on Instagram:

This is the same photo:


Apparently Instagram gave it a blue cast.
I must stress that every other properly color managed program shows correct colors.
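If someone wants to check what darktable actually embeds compared with RawTherapee, the profile is easy to read out of the JPEG; a quick sketch with exiftool (the file name is a placeholder):

# Print the description of the embedded ICC profile.
exiftool -ICC_Profile:ProfileDescription photo.jpg

# Or extract the whole profile to inspect it with other tools.
exiftool -icc_profile -b photo.jpg > embedded.icc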

I use the darktable tags not to get views or favs, but to get the word out that I use darktable, and so that others who use it can compare notes.

A browser issue perhaps. I don’t see a difference here.

It’s clearly visible for me, even in your screenshot. The blue of the sky is a different color; the correct one (the right one, not the Instagram one) is more cyan. I checked in Firefox and Chrome, and on two different screens.

Screenshot of the Instagram photo displayed in Firefox and the downloaded JPG displayed in a color-managed viewer

It seems difficult to see a difference

This!

Much smaller differences in this one, but still: the forest on the right hill is a tad brighter and more yellow, and the sky has more cyan.


You could try swapping output profiles to see if it makes any difference.
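If you want to test the profile theory outside of darktable, one way is to convert the export to a known sRGB profile (or strip the profile entirely) before uploading and see whether Instagram still shifts the colors. A sketch with ImageMagick; the path to the sRGB ICC file is system-dependent and only an example:

# Convert from the embedded profile to a standard sRGB profile and embed that one instead.
convert darktable-export.jpg -profile /usr/share/color/icc/sRGB.icc instagram-test.jpg

# Or strip the profile completely to see whether Instagram then assumes sRGB.
convert darktable-export.jpg -strip instagram-test-noprofile.jpg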