Why aren't we utilizing embedded previews more in Darktable?

It would be the same for both programs - the embedded jpeg is the same size no matter which program you use to extract and view it.

And this was mentioned as a big disadvantage by a few people in this topic: there is no way to show the jpeg 1:1 (pixel by pixel) in DT, which would be a fast and accurate method for evaluating/sorting photos. Processing the fully demosaiced raw for this purpose is too slow. And, as I showed with examples, DT deforms details in the “fit to screen” jpeg previews, and when processing the raw for a 1:1 preview, picture details (especially edges) are deformed even more (see screenshots - the same bug for Panasonic RW2 and Canon CR2 files).

So - the recommended view modes are not only unusable, they also make good pictures look like they are only worth deleting.

That’s the point. With wildlife burst shooting, selecting/sorting/evaluating requires quickly flipping between sometimes 200-300 pictures of almost the same scene. Embedded jpegs (even from cameras that embed a jpeg at 1/4 of the raw resolution) are detailed enough both for a 1:1 preview (e.g. to check how a bird’s claws grip a branch) and for a “fit to screen” preview to decide which picture shows the best pose. If the program uses the embedded jpegs, selection takes minutes. Demosaicing every raw in full, every time the user scrolls back and forth for a 1:1 preview, takes hours for the same session.

My fault, misclicked, not intentional. Normally I reply with quoted fragments where needed. Fortunately no big inconsistency this time.

I guess I’ll drop out of this discussion; you ignored most of my questions (or misunderstood them?).

Once again, if you ask dt to show a 100% preview(*), then you have to demosaic the raw. dt offers several options for that, but of course it’s slower than just slamming a jpg on the screen. If that is not what you want, stick to a zoom that is acceptable for your embedded jpgs. For my current camera, those are 1616×1080, vs 6000×4000 for the full-size raw, so 1 px in the jpg represents ~3.7 px in each direction, i.e. ~14 raw pixels… Not ideal for seeing those fine details in the jpg…

And if you use a 4k screen, a jpg with 1080 on the short side must be scaled up to “show in full screen”. Of course that gives the artifacts you showed.
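To put rough numbers on both points (my camera’s sizes from above; assuming 2160 px on the short side for “4k”), a quick Python back-of-the-envelope:

raw_w, raw_h = 6000, 4000   # full-size raw
jpg_w, jpg_h = 1616, 1080   # embedded preview jpg
scale = raw_w / jpg_w       # ~3.7x linear in each direction
print(f"1 jpg px covers ~{scale:.1f} x {scale:.1f} = ~{scale**2:.0f} raw px")
screen_h = 2160             # 4k screen, short side
print(f"full screen on 4k means upscaling the jpg ~{screen_h / jpg_h:.1f}x")

(Just arithmetic: the first print gives ~14 raw pixels per jpg pixel, the second a ~2x upscale.)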

Both of these points you chose to ignore in your reply…

Bye

(*: as dt is a raw processor, it is not illogical that relative sizes are relative to the raw image size, i.e. in 100% view, 1 screen pixel == 1 raw pixel)

If you want to use the embedded JPEG, you can quickly cull your images using e.g. Geeqie.


Absolutely not ignored (and thanks for your patience). Maybe I misunderstood your approach, or didn’t express myself clearly enough. I was just looking for any method of showing embedded jpegs 1:1, meaning “1 embedded jpeg pixel = 1 screen pixel”. Nothing more, nothing less, no matter the screen size/density. Inside DT, to have everything in one tool instead of culling in external software. All the rest of my workflow (after selection) can be done in DT.

For the Canons I use, the embedded jpeg is 100% of the raw size. Of course you can “extract” more detail from the raw (especially with fur or feathers), and that’s one of the reasons we use raw, but the jpeg is accurate enough to evaluate a particular shot as “delete or save”.
Even my Panasonic’s embedded 1920x1440 jpeg (for a 14Mpix sensor) is in most situations enough to evaluate a shot - if it were displayed pixel-for-pixel on screen.

The first screenshot is the embedded jpeg, scaled down, not up, when using the Alt+W “sticky preview”.

The second screenshot is just that: the raw processed in culling mode and not scaled (“preview zoom 100%”, as it’s called in DT). It gives the worst artifacts.
No matter which demosaicing method I choose in the darkroom, I don’t get these artifacts there; only the time differs (and a negligibly small difference in detail) - 2-5 seconds to get a full-resolution, unscaled preview in the darkroom. In culling mode, when I press the shortcut for “preview zoom 100%”, I get the “scrapped” image shown in the second screenshot - in 40 seconds.

The third screenshot is the Panasonic’s embedded jpeg, not scaled, shown by RT.
For picture evaluation/culling, the embedded jpeg is worth more than an artifact-ridden preview produced in 40 seconds.

Thanks.
I use RawTherapee for that purpose - it shows embedded jpegs either shrunk to fit the screen or unscaled 1:1 (jpeg pixel = screen pixel). Instantly. And it works: quick cull, mark for deletion. After culling the whole session I delete the marked pics in one operation. The raws left in the directory I load into DT and process. I was looking for a way to do the same in DT.

@Igor64: would you care to share the raw (or another exhibiting the problem), along with your darktablerc?

Of course. All raws show the same problem. RW2, CR2. I also tried NEF.

darktablerc.txt (34.4 KB)

Trying to upload 11MB (an 18MB RW2, 7-zipped) through our satellite connection. The forum attachment failed with a timeout. Trying a file-sharing platform instead. Not bad today - it says 7 minutes left.

EDIT: link to RAW pCloud

@Igor64 @Pascal_Obry I was checking out ART yesterday and they have a really nice inspector window in their file browser… I could imagine adding “extract the embedded jpg” from this type of view as an extra icon on their toolbar… agriggio / ART / wiki / Quickstart — Bitbucket - scroll down a few pages in the quickstart guide to see what I mean… There is a nice selection of several ways to view your preview. It would be a nice addition to have something like this in DT… For example, where ART lets you do a quick preview with a couple of types of curves, i.e. linear vs std, DT could do a similar thing but let you compare, say, a basecurve vs a scene-referred quick preview… I would be interested to hear what you think?

Will likely test all ART’s options, thanks.

Ok, trying to download ART.

Igor, I’ll try and make some measurements. I’m dead tired today, but I have not forgotten.

Version 1.61 is the most current… As RT and its fork ART also use the embedded JPG as a reference for their automatch tone curve, I guess it plays a more central role in that software than in DT.

While I did not find performance to be unbearable, it is inconvenient (W takes 2-3 seconds on my machine, but it’s a very old box).
The quality of the preview is absolutely horrible, useless.
An already-processed image with the Alt+W sticky preview:


Zoomed in:

Same image in darkroom:

In darkroom, zoomed in:

JPEG preview displayed in Geeqie:

JPEG preview in Geeqie, zoomed in:

I then disabled the ‘prefer performance over quality’ setting, and there was a change, but the result is only a blurry image instead of a pixelated one (a different shot, to better show image quality):
Sticky preview at 100%:


Darkroom:

JPEG preview in Geeqie, zoomed in:

@Pascal_Obry: this has just occurred to me: does the relevance of the ‘prefer performance over quality’ flag indicate that the culling view (even at 100%) is generated from the small preview image (the one normally displayed in the darkroom top-left)?

No, for me the full preview is not affected by the ‘prefer performance over quality’ flag.

Very interesting. Maybe we’re trying different things? Using different settings? Could the environment cause some difference?
This is what I did:

My screen resolution is 1920x1080. darktable is at 3.3.0+1931~g4d0071658. OS is Kubuntu 20.10.
I’ve tried toggling OpenCL; it did not have an effect.
In my darktablerc (checked after restoring ‘prefer performance over quality’ to true):

kofa@eagle:~/.config/darktable$ grep -i qual darktablerc 
database_cache_quality=89
plugins/darkroom/demosaic/quality=at most PPG (reasonable)
plugins/darkroom/equalizer/expanded=FALSE
plugins/darkroom/equalizer/favorite=FALSE
plugins/darkroom/equalizer/modulegroup=4
plugins/darkroom/equalizer/visible=FALSE
plugins/darkroom/toneequal/expanded=FALSE
plugins/darkroom/toneequal/favorite=TRUE
plugins/darkroom/toneequal/gui_page=0
plugins/darkroom/toneequal/modulegroup=1
plugins/darkroom/toneequal/visible=TRUE
plugins/imageio/format/j2k/quality=
plugins/imageio/format/jpeg/quality=95
plugins/imageio/format/webp/quality=
plugins/lighttable/export/high_quality_processing=TRUE
plugins/lighttable/low_quality_thumbnails=TRUE
plugins/slideshow/high_quality=TRUE
kofa@eagle:~/.config/darktable$ grep -i perf darktablerc 
performance_configuration_version_completed=1
ui/performance=TRUE

@Pascal_Obry, @Igor64: preview quality depends on the darktablerc setting plugins/lighttable/low_quality_thumbnails=TRUE - see Full-size preview unusable to evaluate sharpness · Issue #7158 · darktable-org/darktable · GitHub
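If I read that issue right, the workaround should simply be flipping that flag in darktablerc (edit with darktable closed, since darktable rewrites the file on exit):

plugins/lighttable/low_quality_thumbnails=FALSE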

@Igor64 It turns out there’s another issue: even if you enable the use of embedded JPGs, they are not used. @AlicVB (maybe while investigating the issue above) has opened embedded jpeg not used for zommed preview · Issue #7162 · darktable-org/darktable · GitHub to track this (thanks!).

Thanks for the deep investigation, kofa.

Finally had time to try ART with more patience. I instantly fell in love with ART’s way of presenting RAW previews. ART makes it ultra fast: <2s on my Intel i5-8, compared to DT’s 40s-1min of fully loaded processor work. And “Black&white RAW with highlights” - what a genius and fast mode! :smiling_face_with_three_hearts: Even faster than the color ones. The user can focus on details, luma and highlights.
And ART’s jpeg preview, inherited from RT, is instant and adds no artifacts.
The only thing ART lacks for an absolutely perfect preview is RT’s “on mouse hover” inspect behavior.

@priort Thanks for presenting ART to me. :star_struck: It pushed me to reinvent my whole workflow, but the results are worth it.
Where ART is the best of all the RAW processors I’ve tried:

  • culling/previews
  • demosaicing methods, especially the ultra-fast and detailed IGV one - now my bird shots have their real feathers back
  • the “texture boost” module - good as an input for further processing in DT’s contrast equalizer
  • a really working “highlight reconstruction”. DT’s looks broken compared to RT’s/ART’s, whether Lch or color reconstruction - (A)RT’s is the only one that can reconstruct penguins’ gleaming white chests shot in full sun

I now export full-res 16-bit TIFFs from ART and load them into DT for further processing:

  • color fiddling (now I’m experimenting with processing ART’s untouched tonal output through DT’s filmic)
  • operations with masks
  • gradients (light-years better than RT’s/ART’s GND filter)
  • framing
  • watermarking
  • DT’s genius export profiles/options that I can set for web/competition/publication/print separately (only something like “sharpening after scaling down” is lacking in DT)

Glad you found some useful information in that. I too am re-evaluating, and now that ART and RT have masking and a few other features, I am looking to incorporate them perhaps more than DT. I have found RT to be very powerful, but man, the options are overwhelming, at least for a new user. Now that I have more experience they are a bit less intimidating, but there is still a ton of settings I can’t say I have mastered. ART offers a nicely condensed and modified experience, and the tone-mapping tools may be as good as filmic, but I have to work with them more. The color toning module, with global and local edits, is powerful in RT… you can achieve a lot in that one module alone. I do have a lot invested in DT with presets and the like, but the color management is also more organized IMO in RT, with support for DCP and LCP files from Adobe in addition to ICC and internal profiles, so I get how it all plays out. Also, the auto-matched tone curve based on the embedded jpg is a logical approach and offers what many people are looking for as a starting point. I haven’t checked, but this might be available in ART as well… just not documented… Favorites Tab - RawPedia.

DT’s genius export profiles/options that I can set for web/competition/publication/print separately (only something like “sharpening after scaling down” is lacking in DT)

darktable has lua-scripts that do sharpening after export
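For anyone who wants the idea without the Lua setup: a minimal standalone sketch of “sharpen after scaling down” in Python/Pillow (file names and parameters are placeholders; the real integration is via the darktable lua-scripts mentioned above):

from PIL import Image, ImageFilter

def export_resized_sharpened(src_path, dst_path, long_edge=2048):
    # Scale down first, sharpen second, so the unsharp-mask radius
    # matches the output resolution rather than the raw resolution.
    img = Image.open(src_path).convert("RGB")
    img.thumbnail((long_edge, long_edge), Image.LANCZOS)
    img = img.filter(ImageFilter.UnsharpMask(radius=1.0, percent=80, threshold=2))
    img.save(dst_path, quality=95)

export_resized_sharpened("export.jpg", "export_web.jpg")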


@Igor64: Good news, see use embedded preview even if only auto history applied by AlicVB · Pull Request #7178 · darktable-org/darktable · GitHub