darktable workflows: from copying values to applying intentions

I have been using darktable for over three years, and while I have optimized my open-source photography workflow, I feel we are still missing a powerful way to manage complex workflows.

Currently, we apply edits by copying and pasting the history stack.
However, this process applies specific numerical values rather than image-specific logic.

For example, if I use the exposure picker on image A and get a value of +3.030 EV, copying this to image B will apply exactly +3.030 EV. In most cases, image B needs a different adjustment based on its own histogram, not the fixed value from image A.

The intention was “set exposure using the picker”, but the result is a static number.

The Darktable-Initial-Workflow-Module addresses this by providing a simplified UI for some parameters, but it only covers a small selection of module functions.

I am proposing a Lua plugin designed to execute custom, user-defined workflows based on a modular architecture:

~/.config/darktable/lua/dt-workflows/

├── dt-workflows.lua            -- plugin entry point
├── engine.lua                  -- logic to apply settings to images
├── ui.lua                      -- user interface (buttons/panels)
├── workflows/                  -- selection of workflow strategies
│   ├── agx_workflow.lua        -- e.g. agx color workflow definition
│   └── bw_workflow.lua         -- e.g. black and white workflow definition
└── modules/                    -- definitions for individual modules
    ├── color_balance_rgb.lua
    ├── exposure.lua
    ├── agx.lua
    └── ...

In this system, a workflow is a Lua instruction file containing “intentions” rather than fixed states. For instance, an AgX workflow definition would look like this:

Lua

-- agx_workflow.lua sample
local workflow = {
    name = "agx scene-referred workflow",
    description = "agx-based workflow automation",
    steps = {
        {
            module = "exposure",
            params = {
                mode = "auto_picker", -- intention: use the picker, not a fixed value
                target_exposure = 0.5,
                compensation_bias = 0.0
                -- other parameters
            }
        },
        {
            module = "color_calibration",
            params = {
                mode = "auto_picker",
                illuminant = "D65",
                adaptation = "CAT16"
                -- other parameters
            }
        }
    }
}
return workflow
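The engine.lua from the proposed layout would then just walk the steps of such a file. A minimal sketch in plain Lua, assuming each modules/<name>.lua exports an apply(image, params) function (that per-module contract is my assumption, not an existing API):

```lua
-- engine.lua sketch (hypothetical): walk a workflow's steps and
-- dispatch each one to its module definition in modules/
local engine = {}

-- apply a single workflow to a single image
function engine.apply(workflow, image)
    for _, step in ipairs(workflow.steps) do
        -- each modules/<name>.lua is assumed to return a table
        -- with an apply(image, params) function
        local ok, handler = pcall(require, "modules." .. step.module)
        if ok and handler and handler.apply then
            handler.apply(image, step.params)
        else
            -- skip unknown modules instead of aborting the batch
            print("no handler for module: " .. step.module)
        end
    end
end

return engine
```

Keeping the loop this dumb means all darktable-specific knowledge lives in the module definitions, which is where version churn would be absorbed.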

This workflow could be triggered from both the lighttable and darkroom views.

In the lighttable, a user could select a workflow from a dropdown (reading from the workflows folder) and apply it to a batch of selected images with a single click.

In the darkroom, it would apply the workflow to the current image.
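The lighttable side could be a small lib widget. A sketch using the darktable Lua API (dt.new_widget, dt.gui.selection, and clicked_callback are real API; the wiring, the hardcoded workflow names, and the engine module are illustrative assumptions):

```lua
-- ui.lua sketch (illustrative): a combobox of workflows plus an
-- "apply" button that runs the chosen workflow on the selection
local dt = require "darktable"
local engine = require "engine"  -- the engine.lua from the proposed layout

-- in practice these entries would be read from the workflows/ folder
local chooser = dt.new_widget("combobox") {
    label = "workflow",
    value = 1,
    "agx_workflow",
    "bw_workflow",
}

local apply_button = dt.new_widget("button") {
    label = "apply workflow",
    clicked_callback = function()
        local wf = require("workflows." .. chooser.value)
        for _, image in ipairs(dt.gui.selection()) do
            engine.apply(wf, image)
        end
    end
}
```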

The modules/ folder is a key part of this vision. It should contain maintained definitions of individual darktable modules so that workflows do not break when core modules change. By abstracting the module logic, we ensure long-term stability.
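Concretely, a module definition could be a thin adapter that translates an intention into darktable operations, so only this one file needs updating when the core module changes. A hypothetical modules/exposure.lua (everything here is a sketch of the contract, not existing code):

```lua
-- modules/exposure.lua sketch (hypothetical): one adapter per
-- darktable module, so workflow files never touch raw parameters
local exposure = {}

function exposure.apply(image, params)
    if params.mode == "auto_picker" then
        -- intention: let darktable measure this image, e.g. by
        -- triggering the exposure picker through the GUI; the
        -- darktable-version-specific detail is hidden in this file
    elseif params.ev then
        -- fallback: a fixed compensation value, e.g. params.ev = 1.5
    end
end

return exposure
```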

I believe this approach would allow photographers to handle large batches of images with extreme precision and speed, automating the repetitive technical setup and leaving more time for creative editing.

I am not a professional developer: my experience with Lua is mostly limited to Neovim scripting. However, I am a power user who cares deeply about the darktable ecosystem and productivity.

My goal here is twofold: first, I want to understand if other users feel the same need for intentional workflows. If there is enough interest and the community thinks this could be a valuable feature, I would be happy to open a formal feature request on GitHub with a detailed functional specification.

Secondly, I am looking for developers who might be interested in collaborating on this. I can contribute by testing the engine, defining user needs, and refining the workflow logic, but I need help from the experts to build a robust architecture that can survive darktable’s rapid development.

I would love to hear your thoughts!

Is this a direction worth pursuing, or do you see better ways to achieve this level of automation?

5 Likes

The idea sounds nice, but I am not sure it will work in practice for my images. I tend to use styles in DT to get my edits started. The styles I have created give me a starting point similar to the camera's display (or, more correctly, the camera's JPG). I guess to some extent this is applying an intent to the image.

I wish you luck with your endeavours and will follow the discussion/debate on this.

I shoot sports, thousands of images, hundreds of edits.

I’ve been working on/using workflow automation with Lua for the last couple of years.

My current workflow is based around styles, presets and “recipes”.

When I cull I select an image, then I apply “processing” tags for things such as

local CONDITION <const> = {
  "clear",
  "cloudy",
  "fog",
  "mist",
  "rain",
  "shade",
  "open shade",
  "snow"
}

local EXPOSURE <const> = {
  "correct",
  "under",
  "over",
  "backlit"
}

I take the date from the image and calculate the solar data and cache it so that I have golden hours, blue hours, etc available.

local LIGHTING <const> = {
  "night",
  "blue hour",
  "golden hour",
  "morning",
  "noon",
  "afternoon",
  "stadium",   -- night, check the location
  "studio",     -- condition doesn't apply since it's indoor
  "indoor",
  "mixture"     -- under cover but outside
}

So a recipe is recipe[CONDITION][LIGHTING][EXPOSURE]
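That nested lookup can be sketched in plain Lua; the keys mirror the tag lists above, while the style names and the fallback are invented for illustration:

```lua
-- recipe[condition][lighting][exposure] -> a named style/preset
local recipe = {
    cloudy = {
        ["golden hour"] = {
            under   = "cloudy_golden_under",  -- invented style name
            correct = "cloudy_golden_base",   -- invented style name
        },
    },
}

-- look up a recipe from an image's processing tags, with a fallback
local function find_recipe(condition, lighting, exposure)
    local by_lighting = recipe[condition] and recipe[condition][lighting]
    return by_lighting and by_lighting[exposure] or "general_purpose"
end

print(find_recipe("cloudy", "golden hour", "under"))
-- prints "cloudy_golden_under"
```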

I have 3 styles, one for sports, another for portraits, and a general purpose one. Each style has the modules I use, in the order I want them, and multiple module instances if necessary. There is a matching preset in the modulegroup lib so the list of modules is restricted to just those I use.

When I load an image into darkroom the script reads the path name and

  • applies the appropriate modulegroup preset
  • loads the appropriate style
  • applies the appropriate recipe based on tags and computed conditions
  • applies lens correction based on lens and camera
  • calculates the roll angle of the image and straightens it
  • if the site is indoors, the appropriate color calibration preset is applied
  • checks the ISO and applies the appropriate demosaic method and settings

Then I edit the image using shortcuts to trigger crop and tone equalizer to pop up the module then hide it when I’m finished.

After that I apply the final processing which is mostly the heavy hitters such as diffuse or sharpen. I apply

  • denoise profiled with the appropriate preset based on camera and ISO
  • diffuse or sharpen AA preset with number of iterations altered based on lens and camera
  • colorbalancergb preset
  • diffuse or sharpen local contrast
  • final contrast (if needed).
  • currently I’m incorporating AgX so for now I open the module in case I want to do a last tweak.

Then I hit the spacebar and move to the next image. When the next image is loaded, the script triggers a lighttable thumbnail refresh so I don’t have to wait for thumbnails to regenerate when I return to lighttable.

My workflow is based around my subject, my equipment, my locations, my preferences, etc.

3 Likes

What darktable and Lua can and can’t do

There is no way to edit an image from lighttable. You can apply a history stack, style, or xmp file to an image. Editing an image can only be done in darkroom.

Lua can’t edit an image directly. It can

  • apply a style in lighttable
  • apply an xmp file in lighttable
  • apply a history stack in lighttable
  • manipulate the GUI in lighttable and darkroom.

So, it is possible to “edit” an image in darkroom by manipulating the GUI. It’s possible to outrun the processing, though, so you have to wait for each operation to complete before initiating the next. You can either do sleeps, or monitor the pixelpipe complete event. Sleeps are somewhat system dependent, so I favor monitoring the pixelpipe.
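Gating on the pixelpipe rather than sleeping can be done with the event API. A sketch, assuming the "pixelpipe-processing-complete" event name from recent darktable Lua API docs (the busy-flag gating here is only illustrative; dt.control.sleep yields so the event handler can still run):

```lua
-- wait for darkroom processing to settle before the next GUI action
local dt = require "darktable"

local busy = false

dt.register_event("wf_pipe_done", "pixelpipe-processing-complete",
    function(event)
        busy = false  -- the pipe has settled; safe to continue
    end)

-- called around each GUI manipulation: mark the pipe busy, act,
-- then yield until the completion event clears the flag
local function run_step(action)
    busy = true
    action()
    while busy do
        dt.control.sleep(50)  -- short yield; the event does the real gating
    end
end
```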

Problems

The biggest roadblock is getting enough information to make intelligent decisions. You can query settings in the modules to get values, but there is no way to “look” at an image and determine what it needs (hence my processing tags).

You can crash darktable with Lua and if you start down this road you will :grin:

1 Like

This sounds super cool!

1 Like

That makes me think there may also be a possibility to improve the workflow before even getting to darktable: if you systematically need that kind of exposure correction, you could apply the correction in camera and not let darktable’s exposure module correct for it.

Even that, or setting exposure with the picker, doesn’t give me what I want in many cases…

I don’t really see an automated workflow get things exactly right in every case (or even most cases). And as soon as you have to make adjustments anyway, applying any value to get you close is “good enough”. (of course, if you are working under strictly controlled conditions, automated editing does work, and you probably want to use the same edit settings on the whole series)

(In passing, copying history stacks doesn’t seem to me like the best way to apply a standard workflow. Styles seem more appropriate, although it does come down to more or less the same thing.)

1 Like

I’ve been thinking about this kind of thing, I decided that it might get complex to press all the buttons i normally press with a script, I thought about doing this in stages, so exposure, sharpening, contrast, colour adjustments etc, maybe bound to keys

Most of the time I use the same modules and press a lot of eyedroppers, so it would be useful to do that in chunks rather than press all the buttons myself.
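Binding a chunk to a key is doable today with the Lua shortcut event. A sketch that applies a named style as one "stage" (dt.register_event with "shortcut", dt.gui.action_images, and dt.styles.apply are real API; the style name and lookup helper are invented for illustration):

```lua
-- bind one editing "stage" to a key via a Lua shortcut
local dt = require "darktable"

-- helper: find a style by name ("tone stage" is an invented example)
local function style_by_name(name)
    for _, s in ipairs(dt.styles) do
        if s.name == name then return s end
    end
end

dt.register_event("wf_stage_tone", "shortcut",
    function(event, shortcut)
        local style = style_by_name("tone stage")
        for _, image in ipairs(dt.gui.action_images) do
            if style then dt.styles.apply(style, image) end
        end
    end,
    "apply tone stage style")
```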

1 Like