16 inch laptop for photo editing

[What follows is incorrect: I was thinking of the “reduce preview resolution” option. What kofa meant refers to the lighttable thumbnails. I think.]

That’s for the preview render, i.e. the thumbnail in the corner, but also the color pickers etc. It’s actually a feature I was involved with. Funny thing, I started investigating this based on a false premise: I had noticed that the preview took a long time, so I spent some effort speeding it up; then I realized it was only slow because I had misconfigured darktable to run both preview and main render on the CPU, which slowed them both down. In the end I didn’t need the feature any more, and someone else finished it. I suppose it’s still useful on CPU-only computers.

What I’m proposing here is a similar feature, but for the main render. A “retina” screen, by definition, has more resolution than our eyes can resolve. Therefore, we could render at a lower resolution without losing visible detail. With a bit of luck, we could even reuse the logic of the 4x preview feature, or the new HQ render button.


Thanks for the clarification. It can be confusing at times, with the editor also being referred to as preview.
But isn’t that another option?

reduce resolution of preview image
Reduce the resolution of the navigation preview image (choose from “original”, “1/2”, “1/3” or “1/4” size). This may improve the speed of the rendering but take care as it can also hinder accurate color picking and masking (default “original”).

The names are somewhat confusing for me. darktable 4.9 user manual - multiple devices refers to the main editor view’s pipeline as the full pipeline, but the 2nd display’s pipeline is called ‘preview’, and there’s a separate thumbnail pipeline defined (is that for the lighttable?):

a,b,c... defines the devices that are allowed to process the center image (full) pixelpipe. Likewise devices d,e,f... can process the preview pixelpipe, devices g,h,i... the export pixelpipes, devices j,k,l... the thumbnail pixelpipes and finally devices m,n,o... preview pixelpipe for the second window.
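
To make that format concrete, here is a hypothetical darktablerc line. The device numbers are assumptions for illustration (run darktable-cltest to see how your devices are numbered):

```
# Fields are separated by "/" in the order quoted above:
# full / preview / export / thumbnail / preview (2nd window)
# "*" = any device, "!N" = any device except number N
opencl_device_priority=1/0/*/*/0
```

This would send the center (full) pipe to device 1, the navigation preview and the second-window preview to device 0, and let any device handle exports and thumbnails.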


Oh hot darn, I think I got it wrong, too! Thank you for pointing that out.

So, we have an option for reducing the resolution of thumbnails and the preview, and a high quality option for the center view. I’d like another option for a reduced-resolution/faster center view.


Is this already available in dt 4.8.x?

Ah, I know what the problem is. Apparently the big center preview is processed by the CPU; that's why the view at 100% is so slow while the actual export is quite fast.

So what settings do I have to choose for OpenCL device priority?


I’d try “very fast GPU”, then everything should be processed by the GPU.

I think I tried that and there was no difference.

The strange thing with this machine is: the actual export is faster than on my desktop, but the preview@100% (moving around) is so slow. Even 200% is faster.

Now I activated "very fast GPU" + darktable resources "large" + "use all device memory", and performance seems to be better.
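
For the record, the first two of those preference choices map to darktablerc entries roughly like the following. Key names are as observed in recent darktable versions; treat this as a sketch and verify against your own darktablerc. The "use all device memory" toggle is stored per OpenCL device, so it is easiest to set in the preferences GUI:

```
opencl_scheduling_profile=very fast GPU
resourcelevel=large
```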


Ideally, you’d want your NVidia GPU to process the center view, and have the integrated GPU do the preview (that’s usually still faster than the CPU). This may need to be configured in darktablerc.

“Very fast GPU” schedules both center view and preview on the main GPU, sequentially, which tends to be slower than a GPU/CPU split, or a split between two GPUs.
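
A sketch of such a split in darktablerc, assuming darktable-cltest reports the NVidia as device 0 and the iGPU as device 1 (your numbering may differ). Note that the explicit device priority is only honored with the "default" scheduling profile:

```
# center (full) view on the NVidia, navigation preview on the iGPU,
# export/thumbnail on any device, 2nd-window preview on the iGPU
opencl_scheduling_profile=default
opencl_device_priority=0/1/*/*/1
```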


Is there any tutorial for that?

Keep this off. I think we should even remove it from preferences and keep it as a darktablerc value.


What’s the best config for a decently fast iGPU? It’s a 96 EU Iris Xe packed inside an i7-11370H. OpenCL makes a noticeable difference, but I don’t know whether "default" vs "very fast GPU" makes any difference when I try changing parameters, zooming, etc.


If you have a GPU and a CPU, the default scheduling should render the center view on the GPU, and in parallel, the CPU should render the preview. This is as it should be.

An iGPU is not as fast as a dedicated GPU, but it still offloads work from the CPU, and OpenCL lets it execute more efficiently than the CPU code paths. At least that holds for modern, decently performant iGPUs.

The point is to allow parallel computation of center and preview, with the faster device doing the center and the slower the preview.


There is another limiting factor for an iGPU: it shares system memory. If there is not enough memory, dt will need to tile, and tiling is a huge resource drain.

The best approach is to use -d perf and compare different setups to get objective evidence.
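
A minimal benchmarking session might look like this (-d perf and -d opencl are existing darktable debug flags; run from a terminal and compare the printed timings between setups):

```shell
# start darktable with performance and OpenCL debug output
darktable -d perf -d opencl
# then open an image in the darkroom, zoom and pan, and watch the terminal:
# per-module pixelpipe timings are printed, and tiling shows up in the output
```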


On the other hand, memory transfers between GPU and CPU are free with shared memory. From my own anecdotal testing, this seems to actually favor iGPUs with modern memory and modern GPUs (LPDDR5x, DDR5, ARM Mac), as memory transfers are otherwise quite expensive.

But yeah, at the end of the day, one must benchmark and check on one’s own machine.

Right, I always wondered what the “preview” pixelpipe is… so it’s the little image in the top-left corner that’s processed by the CPU, while the center image goes to the GPU.

It’s the image in the top left corner, but it’s also what the color pickers pick from. I don’t know exactly why this is the case, but perhaps it’s because pickers can exist for all modules, which means intermediate images must be cached for all modules, and that would simply be too expensive at full resolution.

Oh so that’s why the preview updates at a different time? Because it’s processed as a separate pixelpipe (I always thought it’s just a scaled down center view that immediately updates together with center view) and because the CPU is usually slower, the preview is delayed?

I think darktable does not see my integrated GPU as an OpenCL device. It sees only one OpenCL device, the NVidia (according to darktable-cltest).

Strangely, in this respect the new laptop is different from the old one.
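
Two quick checks, assuming a Linux system: clinfo is a separate package that lists every OpenCL device the drivers expose, and darktable-cltest shows what darktable itself detects. If clinfo doesn’t list the Intel iGPU either, the missing piece is usually the Intel OpenCL runtime (packaged e.g. as intel-opencl-icd on Debian/Ubuntu; names vary by distro), not darktable.

```shell
# list OpenCL platforms and devices known to the system
clinfo -l
# darktable's own OpenCL diagnostics
darktable-cltest
```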