From darktable to social media

To show that social media cares even less about you than Aurélien says: they change image requirements and policies on the fly.

Online you often see articles to the effect of “Facebook 2021 Photo Uploading Specs”. Don’t trust the specifics, because they will already have changed. (That said, such articles can be helpful because they get into the nitty-gritty of the various circumstances you should be aware of, such as where and how you are uploading to the platform and for what purpose. E.g. the specs for a header image differ between a Page, an Event and the Feed. Sometimes you have to upload one image that caters to multiple use cases!) Don’t trust Facebook’s own documentation either. It may not be accurate at the time of access, and it is usually too vague to begin with. They will crop your images, destroy your aspect ratios, and clip and distort your videos any way they wish, at whatever time they wish.

The best way to combat this is to try everything and see what sticks at the moment. (The advice above to upload a test image and see what happens to your EXIF is good. Don’t just check your EXIF, but also view your image under different conditions and systems to evaluate how it is affected.) Yes, put your OCD goggles on, send your time down the drain and take an axe to the glass by trying every single method, checking how the responsive design behaves on all browsers, OSes, devices and orientations, with various add-ons and edge cases. Or just don’t care. Be vain and bare your soul to the internets.
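One concrete way to do that EXIF check (just a sketch; assumes exiftool is installed, and the file names are placeholders for your original and the copy you download back from the platform):

exiftool original.jpg > before.txt
exiftool downloaded.jpg > after.txt
diff before.txt after.txt    # also shows trivial differences such as file name and size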

Bonus

A couple of years ago, I read in the news that a woman’s colleague had come across a “sexy” advert featuring her and told her about it. The thing was that she had never uploaded such a photo and was happily married with no second thoughts. So where did this image come from? Why was it edited to be “sexy”? Why did it include personal info? Short answer: because FB sells your info and likeness without reserve. You are their product.

FB was just an example. I have no beef with them. But seriously, know what you are getting into: stay safe digitally. :male_detective:



And read the fine print: at least a number of years ago, Instagram (and Facebook?) had wording in their conditions of use where you granted them a royalty-free, transferable, worldwide license to use and sub-license the content you uploaded, with no compensation in return (with some exceptions for content marked private)…

:rofl:

Uploading PNG results in sharper images on Facebook compared to JPG:
[image]

That’s most likely because FB handles the compression itself, directly from HQ images, instead of recompressing something that was already compressed before. But Instagram, for instance, doesn’t accept PNG files IIRC.

I believe this goes for more than photos. If you upload documents to Google Drive, I believe they have the right to scan them for content and use them… i.e. again, they effectively own your files even though they say you do… so really it’s not free space.

" Your Content in our Services: When you upload or otherwise submit content to our Services, you give Google (and those we work with) a worldwide licence to use, host, store, reproduce, modify, create derivative works (such as those resulting from translations, adaptations or other changes that we make so that your content works better with our Services), communicate, publish, publicly perform, publicly display and distribute such content."

One big difference is that they don’t talk about transferring the license or sublicensing…
They need some permissions to publish your work on their services, after all.

Basically, Google might have a hard time selling your content at the request of a 3rd party, which the (then valid) Instagram license explicitly allowed…

I don’t want to get into a discussion about which of those conditions are “right” or “just”, but it is something to be aware of when uploading content (images, in the case of this forum).

As with many of these agreements, they are sufficiently vague. And as you said, you just need to be aware. It is funny that on one hand they say they respect your privacy, and on the other that if you share something it’s fair game, essentially public domain… they may not use those exact words… This is part of their current terms… in Canada anyway; it looks like there are country-specific versions of the documents.

On JPG compression: a trick I use is to find the maximum compression that gives no more than a certain difference from my source file.

For “difference”, I use a threshold of 1% RMSE. With ImageMagick:

f:\web\im>%IMG7%magick toes.png -quality 80 x.jpg

f:\web\im>%IMG7%magick compare -metric RMSE toes.png x.jpg NULL:
1013.35 (0.0154627)

The RMSE is 0.015 on a scale of 0 to 1, so 1.5%, which is too far from the source to be acceptable.

I do the work in a CMS program, but it is easily scripted.
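For instance, here is a minimal sketch of that search, assuming ImageMagick 7 and standard Unix tools (the file names, quality range and step size are placeholders, not my actual settings):

#!/usr/bin/env bash
# Find the lowest JPEG quality whose normalised RMSE against the source
# stays at or below 1%, then re-encode the final file at that quality.
src="toes.png"
out="x.jpg"
threshold=0.01           # 1% RMSE on the 0..1 scale

best=""
for q in $(seq 95 -5 50); do
    magick "$src" -quality "$q" "$out"
    # `compare -metric RMSE` prints "absolute (normalised)" on stderr
    rmse=$(magick compare -metric RMSE "$src" "$out" null: 2>&1 | awk -F'[()]' '{print $2}')
    if awk -v r="$rmse" -v t="$threshold" 'BEGIN{exit !(r<=t)}'; then
        best=$q
    else
        break
    fi
done

if [ -n "$best" ]; then
    magick "$src" -quality "$best" "$out"
    echo "lowest acceptable quality: $best"
else
    echo "no quality in the tested range stays under the threshold"
fi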

Delta E, PSNR, RMSE, SSIM… are all imperfect metrics that suck at predicting human vision at some point. Honestly, looking at two JPEGs encoded at 80 or 95, I can’t tell the difference at 1:1 zoom for most pictures. If you need some forensic computational method to tell the difference, it’s probably because you don’t see it, so it’s fine with me.

Also, posting shitty pics on social media is an excellent excuse to keep selling HD prints. Good for business too. Again, seeing how much visual content people consume and immediately forget these days, I’m not too bothered with social media RMSE.


I hacked together a simple bash script that runs a for loop based on ImageMagick for each full-size JPG in a temp folder.
Basically it runs (adjust all values to personal liking):

convert -resize 1000x1000 -define jpeg:extent=248KB -unsharp 0.4x0.6 infile outfile

The sweet thing is -define jpeg:extent, which makes convert approach the demanded maximum file size iteratively, searching over the compression quality. If the output would exceed the required maximum size, the last quality that still fits is used.

Other things to play around with are -border and -bordercolor as some sites have these black backgrounds and I like to have the white frame around the picture (thank you for the color assessment mode in dt)
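A rough sketch of such a loop, assuming ImageMagick’s convert is installed (the folder names and all values are placeholders, adjust to taste):

#!/usr/bin/env bash
# Resize, frame, sharpen and size-constrain every full-size JPG in ./tmp.
mkdir -p out
for f in tmp/*.jpg; do
    # jpeg:extent searches for a compression quality that keeps the file under ~248 KB
    convert "$f" -resize 1000x1000 \
        -bordercolor white -border 20 \
        -unsharp 0.4x0.6 \
        -define jpeg:extent=248KB \
        "out/$(basename "$f")"
done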

I guess if you’d rather trust a measurement instead of your own eyes, I think butteraugli is the way to go. But still… always use your eyes.

And start using something like mozjpeg if you care that much about jpeg efficiency.
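As a rough idea of what that looks like in practice (a sketch, assuming mozjpeg’s cjpeg is on your PATH and ImageMagick feeds it a PPM; file names and quality are placeholders):

magick input.png ppm:- | cjpeg -quality 85 -optimize -outfile output.jpg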

I’m skeptical that you can upload something without it ever being recompressed. If it gets recompressed anyway, just uploading the best source that is not too grainy and not too sharp is probably your best bet.

Not a social media user here, let alone a poster. So what do I know.

btw: I think the hashtag #darktable is quite active on Instagram; apparently it is possible to get views and favs with it.

Forget webp, I’m waiting for AVIF :slight_smile:


My experience is that Facebook won’t downsize your pics as long as they’re at 2048 px or less on the long edge, but they will definitely slaughter your image with their compression algorithms regardless. Even if you download your photo from FB and repost that same photo, they will compress it once again, so resistance is futile.
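If you want to pre-size to that limit yourself before uploading, something like this would do it (just a sketch with ImageMagick; the “>” makes it shrink only, never enlarge):

magick input.jpg -resize '2048x2048>' -quality 95 upload.jpg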

My workaround is to send the picture to Flickr and post the link on Facebook. That way, if someone wants to look at my pictures while taking a number 2, on a shiny screen full of finger grease and Escherichia coli, then they have that option.

And thanks for planting that thought in my brain @anon41087856

I follow it and tag my photos too. There are the occasional moody food shoots that are not related to our editor :sunglasses: A more recent #darktableedit tag is also active.

Yea, you are right, those food photographers are spamming the hashtag, especially the front page.

Apparently there are several restaurants named “Dark Table”…

Something to note, not only for social media but also for your personal homepage or Flickr: since JPEG only specifies the bitstream and the decoder, there is room for improvement in the encoder. One of the better ones is MozJPEG (Github, short demo/features).
Another one (but with ginormous runtimes) is Google’s Guetzli (Github).
And here is a comparison between Guetzli and, amongst others, MozJPEG: Guetzli vs. MozJPEG
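For reference, invoking Guetzli is a one-liner. A sketch, assuming you have built or installed the guetzli binary (file names are placeholders; IIRC it refuses quality settings much below 84):

guetzli --quality 90 input.png output.jpg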

If SSIM is not good enough of a quality metric for your taste, there are alternatives:

  • Guetzli uses an Image Quality Assessment (IQA) metric called Butteraugli and optimizes its output for a high IQA score. Butteraugli Github
  • And Netflix’s VMAF (aimed at video formats, but somewhat open source with a BSD2+patent licence). Github
  • There is GMSD. Paper
  • There is MDSI. Paper

(The last two also exist somewhere in code form, not sure about the licenses, once I have dug that out, I’ll post it.)

BUT, I also tend to use compression artefacts and image dimensions as copy-‘protection’. Full quality only for paying customers. So there’s that.

EDIT: found something ‘better’, a pytorch implementation of several IQA metrics. Turns out, there are many many more. Github PyTorch Image Quality

EDIT1: holy moly, I just saw @jorismak’s post… sorry to duplicate stuff :frowning_face:


The challenge with compressing beforehand is that it will definitely be redone by the social network. Compressing something over and over again can destroy the quality of your image. If I were to post, I wouldn’t try to be clever; I would upload the highest-quality image that I am comfortable sharing and let the overlords have at it.

So far AVIF has been slow to gain traction, but it’s coming (image formats always seem to arrive more slowly than video formats… people seem content with JPEG).

It’s good at reaching passable quality at very low file sizes, but it has trouble holding on to fine detail and small grain and such.

I was trying and trying but I couldn’t get a file out of aomenc or rav1e that was the same size and contained the same (fine) detail as a simple heifenc command.

Meanwhile the latest JPEG XL encoder made a file that was smaller AND had fine detail, in less encoding time. And it supports 16 bpp and has a lossless mode. I’m sold, now just waiting for more program support :).

On the other hand, JPEG XR (Microsoft’s HD Photo) was also clearly better than JPEG, and also beat most JPEG 2000 encoders in my tests… And it takes a fraction of the CPU power to do its thing. And it supports lossless, 16 bpp and 16-bit floating point…

It sounds perfect for photo cameras as a JPEG replacement (even using the base JPEG algorithms… existing code and/or hardware chips even had a chance of being reusable for JPEG XR), but it never took off.
So I’ve kinda given up hope that we’ll see much change for good technical reasons… The only chance of getting past JPEG is if the Googles and Apples of the world decide to put their weight behind something.