Does anyone use the preview?

After several weeks of mucking about with @bastibe trying to improve response speed by reducing the resolution of the preview image, I wonder who actually uses it?

The preview is the smallish image at the top left of the darktable window in darkroom view. Bastian noticed that it was taking nearly 50% of the computational time, and set out to reduce that, which he did. There may still be a few bugs, but it’s basically done and the response time is almost halved.

Thing is, while debugging, I realised I never look at that image. Which means it might be more interesting to just get rid of it, or to have the option to turn it off. That would also leave more space for other things, like the different visualisation tools some people like as alternatives to the histogram.

So who uses it, and for what?

Yes, I use it regularly.

When the main image is zoomed in, you are not always able to navigate to the area you want to look at from within the main image itself (it depends on the module you are using). I often use the square in the preview to jump to the part I want to look at.

Please don’t remove it…


Also note that the preview is not just rendered in the top left corner, but also while zooming or panning, before the main view is filled in. Without the preview, zooming or panning shows an empty frame, which is filled in only after a short pause.

An option to turn it off might be ok, but removing it totally would be a harsh blow for me :wink:.

I’m frequently working on digitized diapositives and film negatives. Digitizing is done with a DSLR; the digitized images are about 6000 × 4000 pixels. In the majority of cases these images need some retouching (dust, scratches…). This retouching is done at 100% magnification, which means the image area shows only a very small part of the whole image (heavily zoomed in). It’s very convenient then to have the bright square in the preview to navigate between different areas of the image.


Ok, got it :slight_smile:

I shall no longer propose killing the preview!


…it’s also used to fill in context if the full res pipeline doesn’t cover all of it. this is sometimes used to generate extended boundary handling or to compute global histograms/average brightness etc.
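
for illustration, a minimal sketch of computing a global statistic on a small but full-coverage preview buffer (buffer layout and function names are hypothetical, not darktable’s actual API):

```c
#include <stdint.h>
#include <string.h>

/* global luminance histogram over a packed-RGB float preview buffer.
 * this only works as a *global* measure because the preview always
 * covers the whole image, unlike the zoomed-in main view. */
void preview_histogram(const float *rgb, int width, int height,
                       uint32_t hist[256])
{
  memset(hist, 0, 256 * sizeof(uint32_t));
  for (int i = 0; i < width * height; i++)
  {
    /* Rec. 709 luma from (assumed linear) RGB, clamped to [0,1] */
    float y = 0.2126f * rgb[3 * i + 0]
            + 0.7152f * rgb[3 * i + 1]
            + 0.0722f * rgb[3 * i + 2];
    if (y < 0.f) y = 0.f;
    if (y > 1.f) y = 1.f;
    hist[(int)(y * 255.f + 0.5f)]++;
  }
}
```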

in my prototypical next-gen pipeline implementation i’m always processing the complete image in full, which turns out to be faster for typical dslr resolutions. this is especially true for 4k displays. dt works on the assumption that there are a lot more pixels in the image than on screen, that’s not really the case any more.
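
the arithmetic behind that is easy to check; with made-up but typical numbers:

```c
#include <stdio.h>

int main(void)
{
  /* typical 24 Mpix dslr raw vs. a 4K display (illustrative numbers) */
  const double image_px  = 6000.0 * 4000.0; /* ~24.0 Mpix */
  const double screen_px = 3840.0 * 2160.0; /* ~ 8.3 Mpix */
  printf("image/screen pixel ratio: %.1fx\n", image_px / screen_px);
  /* prints ~2.9x: the full image is only about three screens' worth of
   * pixels, so processing it once and cropping from the cached result
   * can beat re-running the pipeline on every pan or zoom. */
  return 0;
}
```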

This is something I hadn’t thought about: at present the main image is only processed for the part that is seen on screen?
That clarifies the role of the preview… which could maybe be better labelled as the “context” view.
And so in your changes, the full image would be processed up front, with both the preview and the main image pulled out of it as required? That sounds like an excellent idea.

When do you think it will be done?

Now I’m glad I asked such a stupid question.

A proverb says there is no such thing as a stupid question, only stupid answers… :wink:


I can do those too! :smiley:


uhm i’m not really on a schedule. you can test drive some early version here.

i have 120+ local commits on top of that for various reasons. i still need to clean up and push. so be warned it’s very rough around the edges indeed (to the extent that you will ask yourself where the edges are).

development takes time because i want to be sure we have all features in the core pipeline infrastructure that we would ever want. in particular:

  • non-linear/DAG processing (multi input, multi output; see the sketch after this list)
  • naturally support reordering/multi-instance (node editor)
  • full res/no context/preview
  • full GPU/data never leaves the device until written to disk
  • use texture units to abstract away pixel storage formats (f16 f32 ui8 etc)
  • animation/feedback loop support
  • infrastructure for painting/drawing masks by rasterising on GPU
  • human readable and binary history stacks
  • fastest speed we can get
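
to make the DAG/node-editor bullets concrete, here is a purely illustrative sketch of what a multi-input/multi-output node could look like (all names are hypothetical; this is not the actual prototype code):

```c
/* a node in a directed acyclic processing graph. more than one input
 * allows e.g. blending two branches; more than one output allows e.g.
 * emitting an image plus a mask. reordering modules or running several
 * instances just means rewiring edges between nodes. */
typedef struct node_t node_t;
struct node_t
{
  const char *name;       /* module name, e.g. "exposure"           */
  int num_inputs;         /* > 1 enables multi-input blending       */
  int num_outputs;        /* > 1 enables e.g. image + mask outputs  */
  node_t **inputs;        /* upstream nodes feeding this one        */
  void (*process)(node_t *self); /* would run on the GPU in the full
                                    design; a CPU stub in this sketch */
};
```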

some of that has been introduced into the traditional dt pipeline in recent years, but the original design was built on quite different assumptions. that makes it harder to maintain and extend, and slower in the process. so i set out to prove whether a rewrite could improve matters much. i think i’m convinced now, but the way forward is still long.


That would have to wait until I have a functioning GPU, then…

The preview is an invaluable tool for me to judge the composition and overall feel of the image. I go back and forth between the preview and the image, and it helps me judge whether I went too far with an edit or whether the crop is appropriate.

The preview is also used to manage the coordinates of drawn masks when interacting with them. In the tone equalizer, it is used to grab the luminance value of the image buffer at the cursor position and match it to the settings values.

I’m a bit concerned about reducing its resolution. It’s already quite small.
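
For illustration, a minimal sketch of what such a cursor readout has to do, assuming a packed-RGB float preview buffer and cursor coordinates normalized to [0, 1] (the names are hypothetical):

```c
#include <math.h>

/* map a cursor position over the image into the small preview buffer
 * and read back the luminance there; the lower the preview resolution,
 * the coarser this readout becomes. */
float luminance_at_cursor(const float *preview, int pw, int ph,
                          float cx, float cy)
{
  const int px = (int)fminf(cx * pw, (float)(pw - 1));
  const int py = (int)fminf(cy * ph, (float)(ph - 1));
  const float *p = preview + 3 * (py * pw + px);
  return 0.2126f * p[0] + 0.7152f * p[1] + 0.0722f * p[2];
}
```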


Yes, it would be selectable: either a float (as currently) or maybe an enum in core options with choices of 0.25, 0.50 and 1.00.
I’ve been using it for a month at 0.25 and don’t notice a difference, but then that might say more about me.
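
As a sketch of the two option styles under discussion (hypothetical names, not the actual patch):

```c
/* float variant: any factor in (0, 1], as in the current branch */
static float preview_downsample = 1.0f;

/* enum variant: a fixed set of choices exposed in core options */
typedef enum
{
  PREVIEW_DOWNSAMPLE_QUARTER, /* 0.25 */
  PREVIEW_DOWNSAMPLE_HALF,    /* 0.50 */
  PREVIEW_DOWNSAMPLE_FULL     /* 1.00 */
} preview_downsample_t;
```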

On Mon, 16 Mar 2020 at 13:18, Aurélien Pierre via discuss.pixls.us noreply@discuss.pixls.us wrote:

In fact, if I’ve understood correctly (not obvious), the current resolution is far higher than the small preview widget can actually display.
So a linear downsample by a factor of 4 would leave it with about the same pixels/mm as the main image at full scale. That would suggest that using a colour picker would be fine: the visual scale would be coherent with the numerical scale behind it.
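
Back-of-envelope, with made-up but plausible numbers:

```c
#include <stdio.h>

int main(void)
{
  /* hypothetical sizes, purely for illustration */
  const float rendered_w = 1600.f; /* assumed preview buffer width   */
  const float widget_w   = 400.f;  /* assumed on-screen widget width */
  /* a linear downsample by this factor would match the buffer's pixel
   * density to what the widget can actually display */
  printf("oversampling factor: %.1fx\n", rendered_w / widget_w);
  return 0;
}
```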

On Mon, 16 Mar 2020 at 13:31, Graham Byrnes grahamb29@gmail.com wrote:

Yes, because that thumbnail doesn’t matter much. There are other, more critical uses of the preview, such as color pickers, histograms and masks.

It’s not just a matter of scale. Subsampling an image has an averaging/blurring effect. Any time you use a color picker to choose the min or max pixel value in an image, it’s done on the preview. The smaller the preview, the more you dilute those min/max pixels into their neighbours, and the less accurate the measurement becomes.

Then, yes it’s a matter of scale if you are drawing masks that closely follow an edge and expect some precision at full resolution.
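
A toy example of that dilution, assuming a plain box-filter downsample (illustrative only, not darktable’s actual resampler):

```c
#include <stdio.h>

int main(void)
{
  /* a 4x4 patch with one hot pixel among dim neighbours */
  float patch[16];
  for (int i = 0; i < 16; i++) patch[i] = 0.1f;
  patch[5] = 1.0f; /* the true maximum */

  /* box-averaging the whole patch down to a single preview pixel */
  float sum = 0.f;
  for (int i = 0; i < 16; i++) sum += patch[i];
  printf("true max: 1.000, preview max: %.3f\n", sum / 16.f);
  /* prints ~0.156: the hot pixel is diluted into its neighbours,
   * so a min/max picker reading the preview under-reports it. */
  return 0;
}
```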

Ah yes… ok for means, but biased towards the mean for min or max.

For drawing, though, you are still normally free to use the main image?

On Mon, 16 Mar 2020 at 13:52, Aurélien Pierre via discuss.pixls.us noreply@discuss.pixls.us wrote:

I was already wondering why there had been no commits since the beginning of February.

yeah sorry… made a mess by trying various things that aren’t entirely necessary for raw photography. will clean up and push at some point soon…

For anyone wanting to try the variable downsampling option, it’s here:
https://github.com/GrahamByrnes/darktable/tree/PrevDownsample2
The default value is 1, so nothing should change; go into core options and set it to 0.25 to try it out.
Note that it’s built on a very recent darktable master, so it will ask to upgrade your library to v25: if you’re not already there, you should back up first.
(The downsampling changes are about 12 lines of code and don’t do anything too drastic.)