From darktable to social media

And read the fine print: at least a number of years ago, Instagram (and Facebook?) had wording in their conditions of use where you granted them a royalty-free, transferable, worldwide license to use and sub-license content you uploaded, with no compensation in return (with some exceptions for content marked private)…

:rofl:

Uploading PNG instead of JPG results in sharper images on Facebook:

[comparison screenshot]

That’s most likely because FB handles the compression itself, directly from HQ images, instead of recompressing something that was already compressed before. But Instagram, for instance, doesn’t accept PNG files IIRC.

I believe this goes for more than photos. If you upload documents to Google Drive, I believe they have the right to scan them for content and use them… i.e. again, they can use your files even though they say you own them… so really it's not free space.

" Your Content in our Services: When you upload or otherwise submit content to our Services, you give Google (and those we work with) a worldwide licence to use, host, store, reproduce, modify, create derivative works (such as those resulting from translations, adaptations or other changes that we make so that your content works better with our Services), communicate, publish, publicly perform, publicly display and distribute such content."

One big difference is that they don’t talk about transferring the license or sublicensing…
They need some permissions to publish your work on their services, after all.

Basically, Google might have a hard time selling your content at the request of a third party, which the (then valid) Instagram license explicitly allowed…

I don’t want to get into a discussion about which of those conditions are “right” or “just”, but it is something to be aware of when uploading content (images, in the case of this forum).

As with many of these agreements, they are sufficiently vague. And as you said, you just need to be aware. It is funny that on one hand they say they respect your privacy, and then on the other say that if you share something it's fair game, essentially as if it were in the public domain… they may not use those exact words. This is part of their current terms, in Canada anyway; it looks like there are country-specific versions of the documents.

On JPG compression: a trick I use is to find the maximum compression that gives no more than a certain difference from my source file.

For “difference”, I use a threshold of 1% RMSE. With ImageMagick:

f:\web\im>%IMG7%magick toes.png -quality 80 x.jpg

f:\web\im>%IMG7%magick compare -metric RMSE toes.png x.jpg NULL:
1013.35 (0.0154627)

The RMSE is 0.015 on a scale of 0 to 1, i.e. 1.5%, which is too far from the source to be acceptable.

I do the work in a CMS program, but it is easily scripted.
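
For instance, here is a minimal bash sketch of that idea, assuming ImageMagick 7's magick is on the PATH; the source file name, the quality steps and the 0.01 threshold are placeholders to adjust. It steps the JPEG quality down until the normalised RMSE exceeds 1% and keeps the last value that still passed.

src=toes.png
best=""
for q in $(seq 95 -5 50); do
  magick "$src" -quality "$q" candidate.jpg
  # "magick compare" prints "absolute (normalised)" RMSE on stderr
  rmse=$(magick compare -metric RMSE "$src" candidate.jpg NULL: 2>&1 | sed 's/.*(\(.*\))/\1/')
  # stop once the normalised RMSE exceeds the 1% threshold
  if awk -v r="$rmse" 'BEGIN { exit !(r > 0.01) }'; then break; fi
  best=$q
  cp candidate.jpg "${src%.*}_q${q}.jpg"
done
echo "lowest acceptable quality: ${best:-none}"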

Delta E, PSNR, RMSE, SSIM… are all imperfect metrics that suck at predicting human vision at some point. Honestly, looking at two JPEGs encoded at 80 and 95, I can’t tell the difference at 1:1 zoom for most pictures. If you need some forensic computational method to tell the difference, it’s probably because you don’t see it, so it’s fine with me.

Also, posting shitty pics on social media is an excellent excuse to keep selling HD prints. Good for business too. Again, seeing how much visual content people consume and immediately forget these days, I’m not too bothered with social media RMSE.

I hacked together a simple bash script that runs a for loop based on ImageMagick for each full-size JPG in a temp folder.
Basically it runs (all values are up to personal liking):

convert -resize 1000x1000 -define jpeg:extent=248KB -unsharp 0.4x0.6 infile outfile

The sweet thing is -define jpeg:extent, which causes convert to approach the requested maximum file size iteratively by adjusting the compression quality. If the output would get larger than the required maximum size, the last quality value that fits is taken.
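
For reference, a minimal sketch of such a loop, assuming the full-size JPGs sit in ./tmp and the results go to ./resized (both folder names are made up here):

mkdir -p resized
for infile in ./tmp/*.jpg; do
  outfile="resized/$(basename "$infile")"
  # resize, cap the file size and sharpen in one pass
  convert "$infile" -resize 1000x1000 -define jpeg:extent=248KB -unsharp 0.4x0.6 "$outfile"
done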

Other things to play around with are -border and -bordercolor as some sites have these black backgrounds and I like to have the white frame around the picture (thank you for the color assessment mode in dt)

I guess if you want to trust a measurement instead of your own eyes, I think butteraugli is the way to go. But still… always use your eyes.

And start using something like mozjpeg if you care that much about jpeg efficiency.
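
For anyone who wants to try it, a hedged example of what that can look like, assuming mozjpeg's cjpeg binary is built and the source is first decoded to PPM with ImageMagick (file names and the quality value are placeholders):

# decode the source to PPM and let mozjpeg's cjpeg do the JPEG encoding
magick input.png ppm:- | cjpeg -quality 80 -outfile output.jpg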

I’m skeptical that you can upload something without it ever being recompressed. If it gets recompressed anyway, just uploading the best source that is not too grainy and not too sharp is probably your best bet.

Not a social media user here, let alone a poster. So what do I know.

BTW: I think the hashtag #darktable is quite active on Instagram; apparently it is possible to get views and favs with it.

Forget webp, I’m waiting for AVIF :slight_smile:

My experience is that Facebook won’t downsize your pics as long as they’re at 2048px or less on the long edge, but they will definitely slaughter your image with their compression algorithms regardless. Even if you download your photo from FB and repost that same photo they will compress it once again, so resistance is futile.
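
About the only preparation that seems worth doing, then, is to hand Facebook something at most 2048 px on the long edge yourself; a one-line sketch with ImageMagick (file names and quality are placeholders):

# shrink only if the long edge exceeds 2048 px (the ">" flag never enlarges)
magick input.tif -resize '2048x2048>' -quality 92 output.jpg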

My workaround is to send the picture to Flickr and post the link on Facebook. That way, if someone wants to look at my pictures while taking a number 2, on a shiny screen full of finger grease and Escherichia coli, then they have that option.

And thanks for planting that thought in my brain @anon41087856

I follow it and tag my photos too. There are the occasional moody food shoots that are not related to our editor :sunglasses: A more recent #darktableedit tag is also active.

Yea, you are right, those food photographers are spamming the hashtag, especially the front page.

Apparently there are several restaurants named “Dark Table”…

Something to note, not only for social media but also for your personal homepage or Flickr: since JPEG only specifies the bitstream and the decoder, there is room for improvement in the encoder. One of the better ones is MozJPEG Github, short demo/features.
Another one (but with ginormous runtimes) is Google's Guetzli Github.
And here is a comparison between Guetzli and, amongst others, MozJPEG: Guetzli vs. MozJPEG
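
Both are plain command-line encoders; as a hedged example, a Guetzli call with a binary built from the linked repo might look like this (file names and the quality value are placeholders):

# Guetzli reads a PNG (or JPEG) source and writes an optimised JPEG;
# it refuses --quality values below 84
guetzli --quality 90 input.png output.jpg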

If SSIM is not good enough of a quality metric for your taste, there are alternatives:

  • Guetzli uses an Image Quality Assessment (IQA) metric called Butteraugli and optimizes its output for a high IQA score (see the command-line sketch further below). Butteraugli Github
  • And Netflix's VMAF (aimed at video formats, but somewhat open source, with a BSD-2-Clause + patent license). Github
  • There is GMSD. Paper
  • There is MDSI. Paper

(The last two also exist somewhere in code form; I'm not sure about the licenses. Once I have dug that out, I'll post it.)
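
To illustrate the Butteraugli item above: assuming the butteraugli binary is built from the linked repo, comparing a source with its compressed copy is a one-liner (file names are placeholders):

# prints a perceptual distance; larger values mean a more visible difference
butteraugli original.png compressed.jpg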

BUT, I also tend to use compression artefacts and image dimensions as copy-‘protection’. Full quality only for paying customers. So there’s that.

EDIT: found something ‘better’, a PyTorch implementation of several IQA metrics. Turns out, there are many, many more. Github PyTorch Image Quality

EDIT1: holy moly, I just saw @jorismak’s post… sorry to duplicate stuff :frowning_face:

The challenge with compressing beforehand is that it will definitely be redone by the social media site. Compressing something over and over again can destroy the quality of your image. If I were to post, I wouldn’t try to be clever; I would upload the highest quality image that I am comfortable sharing and let the overlords have at it.

So far AVIF is slow to gain traction, but it’s coming (image formats always seem to arrive more slowly than video formats… people seem content with JPEG).

But it’s good at getting passable quality at very low file sizes. It has trouble holding on to fine detail and smaller grain, though.

I was trying and trying but I couldn’t get a file out of aomenc or rav1e that was the same size and contained the same (fine) detail as a simple heifenc command.

Meanwhile, the latest JPEG XL encoder made a file that was smaller AND had fine detail, in less encoding time. And it supports 16 bpp and has a lossless mode. I’m sold; now we just need more program support :).

On the other hand, JPEG XR (Microsoft’s HD Photo) was also clearly better than JPEG, and also beat most JPEG 2000 encoders in my tests… And it takes a fraction of the CPU power to do its thing. And it supports lossless, 16 bpp and 16-bit floating point…

It sounds perfect for photo cameras as a JPEG replacement (even reusing the base JPEG algorithms… existing code and/or hardware chips even had a chance to be reusable for JPEG XR), but it never took off.
So I’ve kinda given up hope that we’ll see much change for good technical reasons… The only chance to get past JPEG is if the Googles and the Apples decide to put their weight behind something.

Avif is but a wild hope.

https://caniuse.com/avif

WebP took 10 years to get support, Apple finally started caring in the latest Safari released Sept. 2020, and now we’re talking about AVIF…
