The same question came to my mind. Maybe it's just not displayed; I will try with -d all.
Another thing I encountered: I loaded the sidecar onto a picture from my Nikon D7100 (6000x4000) vs. gpagnon's pic at ~4000x3000. The Nikon pic exported in 160 seconds; in the same session, gpagnon's pic has been exporting for 20 minutes now.
At least it tries to do tiling, but on my computer without success:
22.583654 [default_process_tiling_cl_ptp] aborted tiling for module 'diffuse'. too many tiles: 5890 x 4018
22.583677 [opencl_pixelpipe] could not run module 'diffuse' on gpu. falling back to cpu path
22.583698 [default_process_tiling_ptp] gave up tiling for module 'diffuse'. too many tiles: 5890 x 4018
I think it is simply that, with host memory set to 0 (no limit other than the available memory), the system can load the entire image into memory without needing to tile. But because OpenCL does not have enough memory, it still needs to tile: it creates a very large number of tiles, processes all of them, and stores the results to merge afterwards. In the end that is too much, so falling back to CPU makes sense on your system.
Increasing the headroom forces the system to leave more GPU memory available for other tasks (per the manual), so when diffuse tries to use the GPU it notices it does not have enough memory and switches to CPU.
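To make that concrete, here is a back-of-the-envelope sketch in plain C. It is not the darktable implementation; the tile size, overlap and limit are made-up numbers. It only illustrates why a module like diffuse, which needs a large border recomputed around every tile, can end up with an absurd tile count once the usable GPU memory shrinks:

```c
/* Hypothetical sketch, NOT the darktable code: why limited GPU memory
 * plus a module that needs large overlapping borders per tile (like
 * diffuse) can blow up the tile count until tiling is abandoned. */
#include <stdio.h>

int main(void)
{
  const int width = 5890, height = 4018;      /* image size from the log above */
  const int max_tile = 1024;                  /* assumed: largest tile edge that fits in GPU memory */
  const int overlap  = 500;                   /* assumed: border the module must recompute around each tile */

  /* only the interior of each tile is useful output; the borders overlap */
  const int useful = max_tile - 2 * overlap;  /* 24 px of real progress per tile edge */

  const int tiles_x = (width  + useful - 1) / useful;
  const int tiles_y = (height + useful - 1) / useful;
  printf("%d x %d tiles\n", tiles_x, tiles_y);

  if((long)tiles_x * (long)tiles_y > 10000L)  /* made-up sanity limit */
    printf("too many tiles -> give up tiling, fall back to CPU\n");

  return 0;
}
```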
I would like someone with more knowledge of when/how DT uses tiling to chime in.
I am not sure this is my case though, as I don’t recall (I am not on that machine now) seeing messages about failed compilation of opencl kernels in the terminal.
Thanks, closed the issue. That’s what I seem to be doing these days: open a feature req, then realise it’s already done. But then why did @gpagnon have the issue? Shouldn’t PR 9764 have taken care of updating the memory limit param?
I think they bumped DT_CURRENT_PERFORMANCE_CONFIGURE_VERSION from 1 to 2; and if darktable detects that the one in darktablerc is old (1), it prompts the user:
Interesting, because I am pretty sure that when I installed the 3.8.0 dmg from the darktable website, in response to that message, I consented to having my old configuration updated by the installation.
I tested this by setting the configuration version back to 1. I restarted darktable and it did ask to perform the logic change, but it selected only 8 GB instead of the 16 I have.
The logic is based on memory and CPU cores (threads); otherwise it keeps the setting as before (1500). That means you need a CPU with at least 5 cores (greater than 4) for this to run, which is not that common (I have 6). The next step in the logic is:
I do have the memory, but I have 6 cores and not 7, so it is not going to follow that path.
The OP has a quad core, so the logic update did nothing for him, if I understood the code correctly.
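To spell out how I read it, here is a paraphrase in plain C. This is not the actual darktable source; the function name, the thresholds and the halving are my assumptions based on what I observed on my machine:

```c
/* Rough paraphrase of how I read the performance-configure logic,
 * NOT the darktable source; names and thresholds are approximate. */
#include <stddef.h>
#include <stdio.h>

static size_t configure_host_memory_limit(size_t mem_mb, int threads)
{
  size_t limit_mb = 1500;                /* old default, kept if no rule matches */

  /* first tier: enough RAM and more than 4 threads -> use part of the RAM
   * (this is presumably how I ended up with 8 GB out of 16) */
  if(mem_mb >= 16000 && threads > 4)
    limit_mb = mem_mb / 2;

  /* next tier (the "next step" mentioned above): needs even more threads,
   * e.g. more than 6, so my 6-core machine does not reach it */
  if(mem_mb >= 16000 && threads > 6)
    limit_mb = 0;                        /* 0 = no limit, use all available memory */

  return limit_mb;
}

int main(void)
{
  printf("mine: %zu MB\n", configure_host_memory_limit(16000, 6)); /* -> 8000 */
  printf("OP's: %zu MB\n", configure_host_memory_limit(16000, 4)); /* quad core -> 1500 */
  return 0;
}
```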
Therefore, I think we do need a pull request. I would propose removing the CPU core count from this memory logic, since I don't see how the two are connected and it sets the bar too high. Maybe make it greater than or equal to 2, but definitely do not use greater than 4.
I agree. It sounds silly not to use the memory if it’s available. If you have few cores, and darktable switches to tiling, those few cores will have to work even harder. (Threads may not be the same as cores, because of hyperthreading, but I have not looked into how the number of threads is determined.)
Update
I think they do it like this (checking both installed memory and threads) because they don’t just tweak memory usage parameters, but also decide on settings that affect the choice of algorithms to use: demosaicer for zoomed-out mode in the darkroom, and ui/performance (‘prefer performance over quality’), which seems to affect:
I was going to raise an issue/pull request to modify the code, but I’m in no rush. I started to read the actual code in master to see if there are more changes since that last pull request. I think I noticed some other changes that use the >= 2, so it needs more investigation. I’m currently busy with work.