I want to migrate my photo repository from jpg to webp, and, obviously, I won’t convert the jpgs directly to webp, because I’d surely get a worse image than before: I’m going to re-export my albums from raw to webp.
My questions are:
I was saving jpgs at quality 90; what webp quality should I set in order to get an image of at least the same visual quality?
I need to make a bash script which loads each raw image from a folder and exports it with a given export preset. Is it possible? How?
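A minimal sketch of such a script, assuming darktable-cli is on your PATH. Note that darktable-cli applies saved *styles* rather than the GUI’s export presets, so you may need to save your edits as a style first. The directories, the .ARW extension, the style name, and the exact conf key are placeholders to adapt to your setup and darktable version:

```shell
#!/usr/bin/env bash
# Batch-export every raw in RAW_DIR to webp with darktable-cli.
# RAW_DIR, OUT_DIR, the .ARW extension and the style name are placeholders.
RAW_DIR="$HOME/photos/raw"
OUT_DIR="$HOME/photos/webp"
mkdir -p "$OUT_DIR"

for raw in "$RAW_DIR"/*.ARW; do
  [ -e "$raw" ] || continue               # skip if the glob matched nothing
  base=$(basename "$raw" .ARW)
  # --style applies a saved style; --core --conf overrides the export quality
  darktable-cli "$raw" "$OUT_DIR/$base.webp" \
      --style "my-webp-export" \
      --core --conf plugins/imageio/format/webp/quality=90
done
```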
I did some experiments using webp on my blog and I dropped it for everything except pngs (using lossless mode), since the chroma subsampling destroyed the details in my photos. I wouldn’t go that way unless you really need to lower your bandwidth.
I had better results with AVIF, but it is too heavy on CPU right now.
The problem is that the “google gods” penalize websites for using jpeg, especially when crawled by the mobile spider. The “google gods” recommend using webp.
For me, I export jpegs at 100%, no resize, with just some diffuse/sharpen. Then I run a small script that resizes/sharpens and finally converts to webp.
The script takes two parameters:

infile: name of the image :: abc.jpg
size: image size in px :: 725

I compare the sharpenings to select the one that I want, usually a radius of 0.8 pixel or 1.0 pixel for images 800 px wide, delete the others, and finally convert to webp.
I did try with tiff; I couldn’t see any quality advantage at sizes below 1500/2000 px. There was a visible difference at the 3000 px mark, but then why use webp? Google will penalize the webpage for too large an image and too slow a page load.
ImageMagick shrinks the file by around 25%, then the webp conversion shrinks it again. I usually get a 725 px image down to 60/70 KB.
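The resize/sharpen/webp step above could look something like this, assuming ImageMagick 7’s `magick` command. The parameter names follow the post, but the sharpening radii, unsharp settings, and webp quality are illustrative guesses, not the poster’s actual script:

```shell
#!/usr/bin/env bash
# Sketch of the resize/sharpen/webp step; the radii and the webp
# quality value are examples to tune by eye, not fixed recommendations.
infile="abc.jpg"     # name of the image
size=725             # target size in px

if command -v magick >/dev/null 2>&1 && [ -f "$infile" ]; then
  # produce one candidate per sharpening radius, to be compared by eye
  for radius in 0.8 1.0; do
    magick "$infile" -resize "${size}x${size}" \
        -unsharp "0x${radius}" \
        "${infile%.jpg}_r${radius}.jpg"
  done
  # once a candidate is chosen, convert it to webp and delete the rest
  magick "${infile%.jpg}_r1.0.jpg" -quality 80 "${infile%.jpg}.webp"
fi
```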
Quality is still “decent”, i.e. viewable, without many distracting artifacts. They are there, but you have to pay attention to see them (which most people don’t).
My primary goal is to save disk space without losing visual quality.
My jpgs are generated from raw with darktable with quality=90.
In the original question I was wondering whether regenerating the webp images from the raws with darktable could give me smaller files with better visual quality, and what webp quality I should use.
You could use https://squoosh.app/ to get a rough idea of the webp quality to aim for, and then test on 4-5 representative pics in darktable. This assumes that darktable uses the same webp algorithm as Google’s Squoosh. Again, I would feed it (Squoosh) a lossless format to test.
I also wonder whether webp (a format best suited to web display, where speed is a primary objective) is the best archival format - testing some pansies I get some strange colour shifts/lightening of shadowed areas even at . Perhaps pushing the jpeg compression to say 85 or 87 might be a reasonable saving in disk space. One other option is to simply carry on using the same jpeg compression but buy more disk space?
Darktable’s webp comes in at around 1/3 smaller @90 quality compared to quality-90 jpeg (tested on one image) and didn’t have any strange shifts/lightening.
Quality level depends on image content, the app and its export settings. To gauge a desirable level, assemble and test a diverse set of images representative of your photography. Then choose a level that covers the most ground and leave some leeway to account for new conditions.
If you do landscape photography, gather a large set of landscape images featuring different times of day, weather conditions, biomes, camera settings and various post-processing interpretations. Include dark, bright, sharp, soft, high/low dynamic range, technical, artistic, etc., frames. You could use other people’s photos sets as long as they represent the types of photos you want to convert.
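One way to run that kind of sweep over a test set, assuming Google’s `cwebp` encoder and a folder of representative lossless pngs (the folder name and the quality steps are placeholders):

```shell
#!/usr/bin/env bash
# Encode a representative test set at several webp qualities so the
# results can be compared side by side; TEST_DIR is a placeholder.
TEST_DIR="testset"

for q in 75 80 85 90 95; do
  mkdir -p "out_q$q"
  for img in "$TEST_DIR"/*.png; do
    [ -e "$img" ] || continue          # skip if the glob matched nothing
    cwebp -q "$q" "$img" -o "out_q$q/$(basename "$img" .png).webp"
  done
done
```

Comparing the output folders (file sizes plus a visual check at 100% zoom) should make it clear where the quality/size knee is for your kind of photography.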
Edit: I merged the previous post with this one to make my commentary more concise.
I know there are issues with copyright, but on a Mac with fswatch and a converter script I export 16-bit png, which is auto-converted to 50%-quality heic. My Sony A6300 24-megapixel files average around 500 KB with excellent quality, and the conversion is extremely fast.
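A sketch of that watch-and-convert setup, assuming macOS’s bundled `sips` for the heic conversion. The directories are placeholders, and the `sips` options should be verified on your macOS version; this is not the poster’s actual converter script:

```shell
#!/usr/bin/env bash
# Watch WATCH_DIR for new pngs and convert each one to 50%-quality heic.
# Directories are placeholders; sips ships with macOS, fswatch is from brew.
WATCH_DIR="$HOME/export/png"
OUT_DIR="$HOME/export/heic"

if command -v fswatch >/dev/null 2>&1 && command -v sips >/dev/null 2>&1; then
  mkdir -p "$OUT_DIR"
  # -0 separates events with NUL so paths with spaces survive the pipe
  fswatch -0 "$WATCH_DIR" | while IFS= read -r -d '' path; do
    case "$path" in
      *.png)
        # formatOptions sets the encoder quality percentage
        sips -s format heic -s formatOptions 50 \
             "$path" --out "$OUT_DIR/$(basename "$path" .png).heic"
        ;;
    esac
  done
fi
```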
I forgot to mention dt has dithering if you want a creative method to reduce file size. Dithering uses fewer colours and reduces the impact of artifacts.