An option to turn it off might be OK, but removing it entirely would be a harsh blow for me.
I'm frequently working on digitized diapositives and film negatives. Digitizing is done with a DSLR; the digitized images are about 6000 x 4000 pixels. In the majority of cases these images need some retouching (dust, scratches…). This retouching is done at 100% magnification, which means the image area only shows a very small part of the whole image (heavily zoomed in). It's very convenient then to have the bright square in the preview to navigate between different areas of the image.
…it's also used to fill in context if the full-res pipeline doesn't cover all of it. this is sometimes used to generate extended boundary handling or to compute global histograms/average brightness etc.
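for illustration, a global average or histogram over such a small buffer is a single cheap pass (a toy sketch assuming a packed 4-channel float buffer; invented names, not the actual dt code):

```c
/* toy sketch: compute average brightness and a global luminance
   histogram from a small preview buffer. assumes packed RGBA float
   pixels; invented names, not darktable's actual API.
   caller must zero hist[] before calling. */
#include <stddef.h>

void preview_stats(const float *buf, size_t width, size_t height,
                   float *avg_out, unsigned hist[256])
{
  double sum = 0.0;
  for (size_t i = 0; i < width * height; i++)
  {
    const float *px = buf + 4 * i;  /* RGBA layout */
    const float lum = 0.2126f * px[0] + 0.7152f * px[1] + 0.0722f * px[2];
    sum += lum;
    int bin = (int)(lum * 255.0f);  /* clamp to [0,255] */
    if (bin < 0) bin = 0;
    if (bin > 255) bin = 255;
    hist[bin]++;
  }
  *avg_out = (float)(sum / (double)(width * height));
}
```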
in my prototypical next-gen pipeline implementation i'm always processing the complete image in full, which turns out to be faster for typical dslr resolutions. this is especially true for 4k displays. dt works on the assumption that there are a lot more pixels in the image than on screen; that's not really the case any more.
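to put numbers on it: a 6000 x 4000 dslr file is 24 megapixels, while a 4k display is 3840 x 2160 ≈ 8.3 megapixels, so the image only has about 3x the pixels of the screen. that's the kind of margin the region-of-interest bookkeeping has to beat.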
This is something I hadn't thought about: at present the main image is only processed for the part that is seen on screen?
That clarifies the role of the preview… which could maybe be better labelled as the "context" view.
And so with your changes, the full image would be processed up front, leaving both the preview and the main image to be pulled out of it as required? That sounds like an excellent idea.
uhm i'm not really on a schedule. you can test drive some early version here.
i have 120+ local commits on top of that for various reasons. i still need to clean up and push. so be warned it's very rough around the edges indeed (to the extent that you will ask yourself where the edges are).
development takes time because i want to be sure we have all the features in the core pipeline infrastructure that we would ever want. in particular (a toy sketch of the node-graph idea follows the list):
non-linear/DAG processing (multi input, multi output)
naturally support reordering/multi-instance (node editor)
full res/no context/preview
full GPU/data never leaves the device until written to disk
use texture units to abstract away pixel storage formats (f16 f32 ui8 etc)
animation/feedback loop support
infrastructure for painting/drawing masks by rasterising on GPU
human readable and binary history stacks
fastest speed we can get
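to make the dag point concrete, a toy sketch of a multi-input/multi-output node in C (invented names, not the actual implementation):

```c
/* toy sketch of a dag pipeline node -- not the real code.
   each node declares its input connections; the graph is evaluated
   in topological order, so reordering and multiple instances of the
   same module fall out naturally. */
typedef struct node_t node_t;

typedef struct socket_t
{
  node_t *node;   /* upstream node producing this input */
  int     index;  /* which output socket on that node */
} socket_t;

struct node_t
{
  const char *name;                    /* module name, e.g. "exposure" */
  int         num_inputs, num_outputs;
  socket_t    inputs[4];               /* upstream connections */
  void      (*process)(node_t *self);  /* runs once all inputs are ready */
};
```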
some of that has been introduced into the traditional dt pipeline in recent years, but the original design was built on quite different assumptions. this means it becomes harder to maintain and extend, and also slow in the process. so i was trying to prove the point that a rewrite could improve matters much. i think i'm convinced now, but the way forward is still long.
The preview is an invaluable tool for me to picture the composition and overall feel of the image. I go back and forth between the preview and the image, and it helps me judge whether I went too far with an edit or whether the crop is appropriate.
The preview is also used to manage the coordinates of drawn masks when interacting with them. In the tone equalizer, it is used to grab the luminance value of the image buffer at the cursor position and match it to settings values.
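A toy sketch of that kind of lookup, assuming a 4-channel float preview buffer at a known scale relative to the full image (illustrative names, not darktable's internals):

```c
/* toy sketch: map a cursor position in full-image coordinates to the
   preview buffer and read back a luminance value. assumes packed RGBA
   float pixels; not darktable's actual code. */
float sample_preview_lum(const float *preview, int preview_w, int preview_h,
                         int full_w, int full_h, float cursor_x, float cursor_y)
{
  /* rescale cursor coordinates to preview resolution, clamped to bounds */
  int px = (int)(cursor_x * preview_w / (float)full_w);
  int py = (int)(cursor_y * preview_h / (float)full_h);
  if (px < 0) px = 0; else if (px >= preview_w) px = preview_w - 1;
  if (py < 0) py = 0; else if (py >= preview_h) py = preview_h - 1;

  const float *p = preview + 4 * ((size_t)py * preview_w + px);
  return 0.2126f * p[0] + 0.7152f * p[1] + 0.0722f * p[2];
}
```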
I'm a bit concerned about reducing its resolution. It's already quite small.
Yes, it would be selectable: either a float (currently) or maybe an enum in core options offering 0.25, 0.50 and 1.00.
I've been using it for a month at 0.25 and don't notice a difference, but then that might be more about me.
In fact, if I've understood correctly (it's not obvious), the current resolution is way higher than the small preview panel can actually display.
So a linear downsample by a factor of 4 would leave it with about the same pixels/mm as the main image at full scale. That would suggest that using a colour picker would be fine: the visual scale would be coherent with the numerical scale behind it.
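(For a feel of the numbers, using hypothetical figures: if the preview pipe renders, say, 1600 pixels across for a panel only about 400 pixels wide, a 0.25 factor matches the buffer to the panel almost exactly, so the on-screen appearance is unchanged; only the numerical precision of anything computed from that buffer changes.)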
Yes, because that thumbnail doesn't matter much. There are other more critical uses of the preview, such as color-pickers, histograms and masks.
It's not just a matter of scale. Subsampling an image has an averaging/blurring effect. Any time you use a color-picker to read the min or max pixel value in an image, it's done on the preview. The smaller the preview, the more you dilute those min/max pixels into their neighbours, and the more wrong the measurement gets.
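A toy example of that dilution (not darktable code): a 4x box downsample folds one hot pixel in with fifteen neighbours, cutting its peak by up to a factor of 16.

```c
/* toy example: a single pixel of value 1.0 in a 4x4 block of zeros
   becomes 1/16 after a 4x box-average downsample. */
#include <stdio.h>

int main(void)
{
  float block[4][4] = {{0}};
  block[1][2] = 1.0f;  /* the "max" pixel */

  float sum = 0.0f;
  for (int y = 0; y < 4; y++)
    for (int x = 0; x < 4; x++)
      sum += block[y][x];

  /* the downsample replaces the whole block with its average */
  printf("max before: 1.0, after 4x box downsample: %f\n", sum / 16.0f);
  return 0;
}
```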
Then, yes, it's a matter of scale if you are drawing masks that closely follow an edge and expect some precision at full resolution.
Anyone wanting to try the variable downsampling option, it's here: https://github.com/GrahamByrnes/darktable/tree/PrevDownsample2
The default value is 1, so nothing should change; go into core options and set it to 0.25 to try it out.
Note that it's built on a very recent darktable master, so it will ask to upgrade your library to v25: if you're not already there, you should back up.
(The downsampling changes are about 12 lines of code and don't do anything that drastic.)
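Conceptually, the change amounts to something like this (a hypothetical sketch with invented names, not the actual commit; see the branch for the real code):

```c
/* hypothetical sketch -- invented names, not the actual patch.
   shrink the preview pipe's processing dimensions by the user-chosen
   factor; every module downstream then works on the smaller buffer. */
#include <math.h>

void apply_preview_downsample(int *preview_width, int *preview_height,
                              float factor /* 0.25, 0.5 or 1.0 */)
{
  *preview_width  = (int)roundf(*preview_width  * factor);
  *preview_height = (int)roundf(*preview_height * factor);
}
```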
I, too, use the preview and like it when I find it useful. I don't use it constantly, so for me it COULD be made invisible.
Also, in add RGB parade and vectorscope as histogram alternatives · Issue #4149 · darktable-org/darktable · GitHub I mentioned that in some cases I'd love to be able to have the histogram on the left side of the window, where the preview is, instead of the preview…
I could also be totally fine with detaching the preview and its zoom-navigation functionality into a preview window on a 2nd screen… and having histogram(s) on the 2nd screen as well.