What is the maximum image size (and file sizes) supported by DarkTable?

Hello,

I’m a beginner with DarkTable and I’m looking for information about the maximum image size (in pixels) it can handle, as well as the maximum file size per file type.

If other users have spotted this information from a credible source, I’d be grateful if you could share the relevant links.

I’m currently having issues (crashes) when attempting to process a large HDR panoramic photo (94912 x 28845 pixels, i.e. nearly 3 gigapixels) from an EXR file (5 GB) that I can open successfully in GIMP, so I need to better understand whether there’s an implicit limit in DarkTable or a bug that I should report.

Many thanks.

How much RAM do you have in the machine?

The machine has 32GB RAM, which I am aware may not be sufficient for images of this size (given the way DarkTable represents pixel information internally), but the software should still not crash; if it needs more memory than the system can allocate, it should pop up a meaningful error message.

Prior to upgrading my PC with additional RAM, I need to know whether DarkTable supports such large images; otherwise the upgrade would be a waste of money.

I’ve never heard of an image size restriction, only of limits imposed by your hardware: RAM and CPU bottlenecks.

2 Likes

If you define “crash” as the application closing unexpectedly then I think you’ll find this is a function of the OS, rather than the application.

When an application allocates memory, modern OSes will report success even if the amount requested exceeds the amount available. When the application later attempts to use this memory (that the OS said was available), the OS raises an exception and closes the program (or, in some cases, some other process instead).

Simply put, the application can’t know to pop up a meaningful error message when it attempts to allocate RAM because the OS lies to it.
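
To illustrate (a minimal C sketch, nothing darktable-specific; whether the allocation fails cleanly or only blows up when the memory is touched depends entirely on the OS and its overcommit settings):

```c
/* overcommit.c - why an app may never get the chance to show a clean
 * "out of memory" message. Illustrative sketch only; the outcome depends
 * on the OS and its overcommit / paging policy.
 *
 * Build: gcc overcommit.c -o overcommit
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    size_t huge = (size_t)64 << 30;   /* 64 GiB, more than this machine has */

    char *buf = malloc(huge);
    if (buf == NULL) {
        /* Case 1: the OS refuses up front. Only here can the application
         * react and pop up a meaningful error message. */
        fprintf(stderr, "allocation refused - the app can report this\n");
        return 1;
    }

    /* Case 2: an overcommitting OS reports success without backing the
     * pages with real memory. */
    printf("malloc() 'succeeded' for %zu bytes\n", huge);

    /* The pages are only claimed when touched; if they cannot be backed,
     * the process is typically killed here, with no chance to show a dialog. */
    memset(buf, 0, huge);

    printf("survived touching the memory\n");
    free(buf);
    return 0;
}
```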

I think this is very vague. What does the crash report say? What’s in the log? What OS are you using?

Wow! What are you going to do with that image? You could print it 8 meters wide at 300 dpi :astonished:.

did you try photoflow on it? it’s based on the vips processing library, the main idea behind that is tiled processing for practically unlimited resolution, as far as i remember. it never seemed the obvious use case to me in the context of photography, but this seems to be exactly the match here.
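
for what it’s worth, a minimal libvips sketch of that idea (plain libvips rather than photoflow itself, and whether the EXR loader can really stream is another question, so treat it as illustrative only):

```c
/* vips_downscale.c - sketch of tiled/streaming processing with libvips.
 * Build: gcc vips_downscale.c $(pkg-config --cflags --libs vips) -o vips_downscale
 * Usage: ./vips_downscale input.exr output.tif
 */
#include <vips/vips.h>

int main(int argc, char **argv) {
    if (VIPS_INIT(argv[0]))
        vips_error_exit(NULL);
    if (argc != 3)
        vips_error_exit("usage: %s IN OUT", argv[0]);

    /* Sequential access lets libvips pull the image through in strips/tiles
     * instead of decoding the whole frame into RAM at once. */
    VipsImage *in = vips_image_new_from_file(argv[1],
        "access", VIPS_ACCESS_SEQUENTIAL, NULL);
    if (!in)
        vips_error_exit(NULL);

    /* Example operation: downscale to 10%; pixels are computed on demand
     * as the output file is written. */
    VipsImage *out;
    if (vips_resize(in, &out, 0.1, NULL))
        vips_error_exit(NULL);

    if (vips_image_write_to_file(out, argv[2], NULL))
        vips_error_exit(NULL);

    g_object_unref(out);
    g_object_unref(in);
    vips_shutdown();
    return 0;
}
```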

2 Likes

Disclaimer: I’m not a darktable developer, but reading the manual at https://darktable-org.github.io/dtdocs/en/special-topics/memory/ one can find:

If you have a 20MPx image then, for precision reasons, darktable will store this internally as a 4 x 32-bit floating point cell for each pixel.

I guess this also applies to larger images, so you will definitely run out of memory on your system with this huge image. To analyze the real cause of your issue you will have to run darktable in debug mode from the command line (option -d all) to create a log. If darktable really crashed, you will possibly find a backtrace in /tmp (Linux path).
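
To put numbers on that for the panorama in question, here is a back-of-the-envelope sketch using only the 16 bytes per pixel figure from the manual (it ignores whatever else darktable keeps around):

```c
/* bufsize.c - rough size of one full-resolution buffer at
 * 4 channels x 32-bit float (16 bytes) per pixel, per the manual. */
#include <stdio.h>

int main(void) {
    const long long width  = 94912;           /* the panorama from the first post */
    const long long height = 28845;
    const long long bytes_per_pixel = 4 * 4;  /* 4 x 32-bit float */

    long long pixels = width * height;                          /* ~2.74 gigapixels */
    double one_buffer_gb = (double)(pixels * bytes_per_pixel) / 1e9;

    printf("pixels:         %lld (~%.2f Gpx)\n", pixels, pixels / 1e9);
    printf("one buffer:     ~%.1f GB\n", one_buffer_gb);         /* ~43.8 GB */
    printf("input + output: ~%.1f GB\n", 2.0 * one_buffer_gb);   /* ~87.6 GB */
    return 0;
}
```

So even a single full-resolution copy is larger than the 32GB in the machine, before any module buffers come into play.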

1 Like

At that size you don’t even need 300 ppi because the viewing distance will be huge. I guess 100 ppi will do, so that gives a print of about 24x8 meters. Can printers (the people who print) handle that?
(Anyway, glad I don’t have to pay for that print :wink: ).

There are many gigapixel panoramas online. My guess is this one isn’t intended for printing, either.

Thanks. I found this info and it’s very useful for estimating the memory footprint of an image (4 x 32-bit FP per pixel), but the rest of that page talks about the challenges rather than about how DarkTable manages memory usage.

For example, on the same manual page, it also says that “as we want to process the image, we will at least need two buffers for each module – one for input and one for output”, but it doesn’t clarify how DT manages these buffers.

The OS is Windows 10 and the error message comes and goes quickly. It’s an “unexpected exception”, most likely caused by DT not being able to reserve as much RAM as it needs to open the image. I tried running DT in debug mode but don’t know where to find the log file. It’s not in the location mentioned in DT’s FAQ.

The file is in that location, but the folder is a hidden one. You need to type the address manually.

I meant to reply to @Thomas_Do and to @paulmatth regarding printing, and should have used the “reply to comment” button instead of “reply to topic”. Sorry about the confusion. If you check my activity on the forum, you’ll see that I try to be helpful and to play nice.

4 Likes

@kofa thanks for clarifying. Glad it was my misunderstanding.

@hanatos I’ll check this out.

@g-man the log files aren’t there, hence I can only assume they were not created in the first place. In the meantime, I did a bit more research and decided that 128GB RAM would be beneficial for some workflows, so I upgraded and, subsequently, DT opened the 3-gigapixel file without any issues.

In case this is useful to other people, these are my initial observations:

  • 32GB RAM allows opening a 2 or 3 gigapixel image in GIMP 2, but it will be very slow even with a fast SSD, as Windows relies on the page file and constantly swaps memory pages.

  • On a system with 128GB RAM, utilisation reaches ~70GB and drops down to ~50GB when cropping the image down to 2 gigapixels. Hence, 64GB RAM should be sufficient to load images of this size.

  • However, processing such big images often requires software modules to create two copies, one to be used as input and another to be used as output. Hence, 128GB RAM is recommended in these cases.

  • What I hadn’t thought of: as I upgraded from a 2-DIMM (dual-channel) configuration to a 4-DIMM (quad-channel) configuration on the X299 platform, the RAM’s maximum transfer rate doubled for multi-threaded workloads. Simple everyday tasks like switching between web browser pages, scrolling down long web pages full of images, or running multi-threaded image processing modules are now visibly/measurably faster.

3 Likes

Don’t forget to reserve a significant amount for a GPU with sufficient memory. The more tiling, the less responsiveness…

1 Like

There are no inherent restrictions on image size, except that some file types might not support such dimensions (and that is currently not checked by dt “in the best possible way”).

Otherwise, dt requires 32 bytes per image pixel for the input and output buffers that hold the data to read from and write to. So a 100Mpx image would require at least 3.2GB for these buffers alone.
Plus you need some memory for the modules’ processing. Most likely you will be fine with another 3.2GB to keep internal processing “under control”.

So, as a rule of thumb, 6GB of RAM per 100Mpx will be fine.

So your image would be “fine” with about 180GB of RAM, which practically no one has. So dt would have to rely on the OS’s capabilities, i.e. swapping memory. That behaviour depends on your OS and/or the way your Linux kernel handles this; recent mainstream distributions might just kill the app…
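
Spelled out as a tiny calculation (just the arithmetic from this post: 32 bytes per pixel for the buffers plus roughly the same again for processing; these are estimates, not measurements):

```c
/* rule_of_thumb.c - the ~6GB of RAM per 100Mpx estimate spelled out:
 * 32 bytes/pixel for the input+output buffers, plus roughly the same
 * again to keep the modules' internal processing under control. */
#include <stdio.h>

static double ram_needed_gb(double megapixels) {
    double buffers_gb    = megapixels * 1e6 * 32.0 / 1e9;  /* in + out buffers */
    double processing_gb = buffers_gb;                     /* same again, roughly */
    return buffers_gb + processing_gb;
}

int main(void) {
    printf("100 Mpx image:     ~%.1f GB\n", ram_needed_gb(100.0));   /* ~6.4 GB */
    printf("2738 Mpx panorama: ~%.0f GB\n", ram_needed_gb(2738.0));  /* ~175 GB, the ~180GB ballpark */
    return 0;
}
```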

3 Likes

The real (vague) answer is that the more resource-intensive the work, the more support a system needs. In my limited experience, resource use is not linear: the requirements climb sharply as we approach system limits. I encounter this problem all the time because I have an extremely low-end system, so I am constantly making trade-offs to optimize my workflow and maintain a healthy overhead so it does not crash randomly.

In conclusion, sure, you could ask about the maximum size, but approaching it from the corollary minimum-requirements point of view is moot because the experience would be terrible. So the strategy is generally: how expensive a system can you afford? :dollar:

@MStraeten not sure what you mean by “reserve significant amount for the gpu with sufficient memory. The more tiling the less responsiveness”. Is there a thread where this topic has been discussed, or an article I can read about this?