linux laptop darktable 2024

First of all, I have read all the other posts on the subject (I am also well aware that I am repeating myself here – but it’s been a few years and I have lost touch with the current situation regarding both new hardware and updated drivers). Sorry if “my topic is similar to…” but I am really in search of some friendly advice.

I need to replace my current laptop, a generic 15" gaming laptop with an Intel Core i7-10875H, 32 GB of RAM, an Nvidia GeForce RTX 2070 8 GB, a 1 TB Samsung 970 Evo Plus SSD, etc. I’m listing the specs because obviously this is my reference; this laptop still feels very fast to me, so speed is not an issue here – but it is very heavy, so the replacement will have to be equally fast (if it’s only a bit slower that would be fine too) and much lighter (I’d say around 1.5 kg max).

Two essential requirements: it needs to run darktable and DaVinci Resolve (which I install under Ubuntu using makeresolvedeb).

The alternatives I’m evaluating are:

[System76 does not seem to have a smaller/lighter laptop with nvidia gpus]

The first basic question I have is about AMD CPUs: the Thinkpad P14s Gen 4, for example, can be ordered with an Intel Core i5-1350P vPro 13th gen plus an Nvidia RTX 500 4 GB (price: ~1570 euros), or with an AMD Ryzen 7 PRO 7840U (~1500 euros). In this case prices are similar, but there is always a lower cost with AMD CPUs. The AMD configurations don’t come with a discrete GPU though, so:

  • will this AMD CPU run darktable as fast as the Intel CPU with that Nvidia card (which I bet is a joke compared to the RTX 2070 I currently own)?
  • is the internal AMD GPU recognized as such by DaVinci Resolve (until a few years ago only Nvidia GPUs were recognized and allowed you to run Resolve under Linux/Ubuntu)?

I mean I could go on, but this is essentially all I need to know. If laptops with only AMD CPUs cannot run Resolve, then I can just delete a (large) number of options from my list – which is a shame considering they’re often cheaper and used by many manufacturers for lighter laptops.

Also, if the RTX A500 is really so underpowered compared to my current setup, I will also have to cross those Lenovo laptops off my list and go for Tuxedo or Framework(*).

(*) incidentally, I only got to know these from something I read today – does anybody here have first-hand experience to share?

For Linux, Intel/Nvidia is (sadly) recommended. You will want CUDA support.


Edit: originally, I wrote that my CPU is a Ryzen 5 3600X, but it is a 5600X.

The A500 is about half the speed of the 2070, so that wouldn’t be so bad (my 1060, which is about 50% faster – so halfway between that laptop GPU and your 2070 – is 5-10x faster than my Ryzen 5 5600X CPU). However, it has half the RAM, so you may get more tiling, which can slow down processing dramatically.
https://www.videocardbenchmark.net/compare/4649vs4001/RTX-A500-Laptop-GPU-vs-GeForce-RTX-2070

The AMD CPU in the new laptop seems to be > 50% faster than the Intel in the old one:
https://www.cpubenchmark.net/compare/5319vs3726/AMD-Ryzen-7-PRO-7840U-vs-Intel-i7-10875H

It’s also about 40% faster than the Ryzen 5 3600X, which I referenced here before.
I then remembered that I have a Ryzen 5 5600X, not 3600X, but even that is a bit slower than the new AMD Ryzen 7 PRO 7840U.


Additional summary: so, that would mean you get a GPU that can be about 5 times faster than your CPU, provided it does not run into tiling (which can easily occur with higher resolution images and more extreme diffuse or sharpen settings, but you should be OK if you don’t push it hard). You may want to read this: Export times on iMac and Mac mini with diffuse&sharpen.
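
If you want to see whether tiling actually kicks in on one of your own images, darktable can log it. A minimal sketch using darktable-cli’s debug flags (input.raw and output.jpg are placeholder file names, not anything from this thread):

# export one image with OpenCL, tiling and timing debug output enabled;
# messages mentioning tiling in the log mean a module could not fit the image in GPU memory in one go
darktable-cli input.raw output.jpg --core -d opencl -d tiling -d perf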


Not for darktable, which uses OpenCL.
Nvidia does seem to be easier when it comes to installing the OpenCL drivers, though (and other programs may require CUDA, which is an Nvidia-only technology, afaik).
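
Either way, it is easy to check beforehand whether the OpenCL driver is actually picked up. A rough sketch (clinfo comes from the distro’s clinfo package; this is a generic check, not a recommendation for a specific driver):

# list the OpenCL platforms and devices the installed driver exposes
clinfo

# start darktable with OpenCL debugging and read the startup log:
# it reports whether OpenCL is available/enabled and which device it will use
darktable -d opencl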


Thanks everyone for the comments, really useful!

I did some more digging and realized that in fact my choices are very limited! Neither the Tuxedo Pulse 14 nor the Framework 13 has an Nvidia GPU, so I’m stuck with the Lenovo Thinkpad P14s and the crappy RTX A500 with 4 GB.

I’m also forced to get a Gen 5, which is the only one available within a few days… specced with 32 GB of RAM and a 1 TB SSD, it comes at a rather hefty price of ~1800 euros.

I’m considering ordering it and then trying it for 2 weeks (free return) to see if I can indeed notice any performance difference.

Which may not be as crappy as you’d think. I really think it depends on the image size and your processing.

Exporting a 20 MPx image with my usual settings (2 instances of diffuse or sharpen, one with the preset local contrast: fine, the other with sharpen demosaicing: AA filter):

Without OpenCL: pixel pipeline processing took 14.170 secs (136.335 CPU).
With OpenCL, and darktable’s resources setting at small, with a maximum of 2888 MB GPU RAM consumption as reported by nvidia-smi: pipeline processing took 11.255 secs (17.801 CPU).
With normal: 4594 MB GPU RAM consumed (pixel pipeline processing took 6.069 secs (10.132 CPU)).
With resources set to large, my maximum GPU memory consumption was 5.8 GB (pixel pipeline processing took 4.355 secs (5.542 CPU)), with darktable and Firefox running on KDE; the baseline was 793 MB when everything was idle. Xorg consumed 403 MB, Firefox 217 MB. With a lighter WM/DE, maybe Xorg’s usage could be reduced, and one can close the browser while editing.
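
For what it’s worth, the resources level can also be forced per run instead of changing the GUI preference, and GPU memory can be watched from a second terminal while the export runs. A minimal sketch, assuming darktable’s resources config key can be overridden on the command line and using a standard nvidia-smi query (file names are placeholders):

# export with the resources level forced to 'large', with OpenCL and timing debug output
darktable-cli input.raw output.jpg --core --conf resources=large -d opencl -d perf

# in another terminal: print the GPU memory in use once per second during the export
nvidia-smi --query-gpu=memory.used --format=csv -l 1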


@aadm wants darktable and Resolve compatibility. darktable works well with Nvidia and AMD graphics. Resolve works well only with Nvidia. Nvidia makes the most sense?

If you haven’t made a decision yet – System76 is another option. Even if you don’t buy from them, at the very least you know what their configurations are.

From the opening post: “[System76 does not seem to have a smaller/lighter laptop with nvidia gpus]”

Sorry - I missed that.

IMO, aside from performance, there may be other important parameters if you plan to use the built-in screen, e.g. gamut, minimum brightness, PWM frequency (if used)…

Screen quality was another of the key features I had in mind back in 2020, but the reality is that most of my processing is done back at home, where I connect the laptop to an external monitor that has a wide gamut, is calibrated (well, sort of…), etc., so the laptop screen is really something I use for editing/selections or other everyday tasks.

But thanks for reminding me of that; Lenovo does have some options for better screens (even with touch features, which I don’t really need), so I’ll have a second look at the price differential for getting a wide-gamut display.


Thanks Kofa!

… I do hope it’s an exaggeration and that I will find no great difference (maybe compensated by the 1.5x speed of the main CPU). In reality I never use diffuse or sharpen, which seems to be a particularly heavy process — my standard preset consists of a tone mapper (sigmoid or filmic) plus color balance and tone equalizer, denoise when I need it, and not much else…

(I am almost set on the decision to get the P14s thinkpad, I’m traveling now and by this evening I’ll place the order if nothing else changes — like a sudden price increase! I will then update this thread with my thoughts…)


On laptops with Nvidia/Intel hybrid graphics there is also the Optimus technology integrated in the driver (at least on Ubuntu). Running with the “on demand” option, the discrete GPU only activates when you explicitly tell it to, like in Windows (any OpenCL or CUDA computation is ‘explicit’ in this sense, no need to modify anything). So you just open darktable as usual: the screen and DE run on the integrated Intel GPU, but the OpenCL heavy lifting is done on the Nvidia card. I have it configured like that and the GPU idles at 40-50 MB.
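
On Ubuntu this switching is handled by the nvidia-prime package; a minimal sketch of checking and setting the mode with the stock prime-select tool (it takes effect after logging out or rebooting):

# show the current PRIME profile (nvidia / intel / on-demand)
prime-select query

# run the desktop on the Intel iGPU and wake the Nvidia GPU only for CUDA/OpenCL work
sudo prime-select on-demand

# afterwards, the discrete GPU should sit nearly idle until darktable starts an OpenCL job
nvidia-smi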


Brief update: 2 days ago I received the Lenovo laptop, after a rather interesting trip from Hefei to Shanghai (China, 29 June), then Anchorage and Louisville in the US (29-30 June), finally Koeln (Germany, 1 July) and Milan, Italy…

It’s a P14s with an Intel Core Ultra 5 125H, 32 GB of RAM and an Nvidia RTX A500 with 4 GB. I had issues installing Ubuntu, which was rather surprising to me… I tried 22.04 LTS, then 23.10, then I decided to download the newest LTS release (24.04), which I didn’t really want to use since it was so new, but hey, that one worked!

Transferred all my files, installed all the applications, and finally yesterday I got around to playing with dt 4.6.1 – installed from OBS.

The laptop feels very fast, faster than the old one (specs in the first post), and it’s lighter (not as light as I was hoping for, however: 1.6 kg vs 2.2 kg). It’s definitely more portable, though.

I did some benchmarks with the usual bench.SRW test image and this is the comparison over 3 runs with and without OpenCL:

(base) aadm@darksl8:~/Pictures/darktable_test $ ./dt_bench.sh 
>> OPENCL: YES
run 1:      5.1021 [dev_process_export] pixel pipeline processing took 4.336 secs (10.425 CPU)
run 2:      4.9604 [dev_process_export] pixel pipeline processing took 4.334 secs (10.455 CPU)
run 3:      4.9868 [dev_process_export] pixel pipeline processing took 4.357 secs (10.431 CPU)
>> OPENCL: NO
run 4:      7.1437 [dev_process_export] pixel pipeline processing took 6.570 secs (87.024 CPU)
run 5:      7.5325 [dev_process_export] pixel pipeline processing took 6.956 secs (86.992 CPU)
run 6:      7.5131 [dev_process_export] pixel pipeline processing took 6.942 secs (86.801 CPU)

As a comparison, this is from the old laptop (GeForce RTX 2070 8 GB and Intel i7):

(base) aadm@psion:~/Pictures/darktable_test $ ./dt_bench_custom.sh 
>> OPENCL: YES
run 1:      3.5394 [dev_process_export] pixel pipeline processing took 2.552 secs (12.229 CPU)
run 2:      3.6524 [dev_process_export] pixel pipeline processing took 2.632 secs (12.500 CPU)
run 3:      3.6010 [dev_process_export] pixel pipeline processing took 2.592 secs (12.616 CPU)
>> OPENCL: NO
run 4:      8.9039 [dev_process_export] pixel pipeline processing took 7.964 secs (116.280 CPU)
run 5:      8.8454 [dev_process_export] pixel pipeline processing took 7.916 secs (115.429 CPU)
run 6:      8.8857 [dev_process_export] pixel pipeline processing took 7.887 secs (114.563 CPU)

So the GPU performance of the new laptop is indeed slower (4.3 secs vs 2.6 secs on average), but the CPU performance is slightly better.

I will have to do a longer session of editing and processing photos and see if I can indeed perceive this difference in real life.
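
For completeness, the dt_bench.sh wrapper itself isn’t shown above; a rough sketch of what such a script could look like, assuming darktable-cli and the bench.SRW test file (plus its .xmp sidecar) in the current directory – this is illustrative, not the original script:

#!/bin/bash
# run the export three times with OpenCL enabled, then three times without,
# and print only the pipeline timing line from darktable's -d perf output
for opencl in true false; do
    echo ">> OPENCL: $opencl"
    for run in 1 2 3; do
        rm -f bench.jpg
        darktable-cli bench.SRW bench.jpg --core -d perf \
            --conf opencl=$opencl 2>&1 | grep "pixel pipeline processing"
    done
done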
