Wiring darktable with Krita

You are right, the Lua script is very limited. But it is what is possible at the moment without changing the darktable code. A really nice solution would be a module that starts Krita (or any other program) and a method to attach or link files to a photo.
Like @betazoid wrote, it would be useful for gmic too. And I can think of having gegl, imagemagick, vips or even blender in there too.
Perhaps the most flexible solution would be a module that starts a Lua script. Then you don’t need to hardcode any program. I like the idea of simply running programs via CLI, because it is simple and easy to debug. But at the same time you need to work with files, and that is slow. Some kind of direct communication would be much faster but needs more work on both sides. (I’m just brainstorming. :thinking:)

If you did it with lua, you would need a function to take the krita output and inject it into the pipeline. It would be nice if the function was not krita specific so that we could use the injection for other pieces of software, such as gmic, imagemagick, gimp, blender(?), …

However, if we do this, then we cross the line that Lua isn’t supposed to be used in darkroom mode.

Bill

@Tobias, you are on the same track that I am, just quicker :smiley:

I wonder if the other way round would make sense as well, or even more sense: having a darktable layer (or several) in $pixeleditor. That would maybe mean separating the raw processing from the library more cleanly, so that you can have the former alone. And the pixel pipe would have to accept either an input file or the pixel data from the layer below. Anyway, just thinking out loud … In the Adobe world I think they have both options. But then you could have darktable layers embedded in $pixeleditor documents embedded in darktable pipelines embedded in …

Krita does support file layers already, so in part, perhaps the OP could allow import of darktable files.

@Carmelo_DrRaw Any inputs since you made Photoflow plugin for Krita?

If I remember correctly someone was working on adding gmic filters to darktable and had implemented a few. Their code might provide a starting point. Darktable PR 2557

To be clear, I don’t want filters (GMIC, GEGL or else). I believe filters (whatever they are) should be integrated in darktable as stand-alone modules, in order to wire them to the OpenCL pipe when available, and optimize them manually because these libs are usually painfully slow and don’t always use the same data structure as darktable.

What I’m after is a way to integrate painted layers (stuff that can’t be written as filters) in the scene-linear part of darktable pipe, meaning bypassing every conversion to integer formats or colour management massaging, to paint with linear light without having to export, import, re-export, re-import, duplicate, check encoding, check colour space, read some doc, ask questions on a forum, do stupid things without knowing it, claim that the feature doesn’t work, argue that darktable is garbage, etc.

For me, the photographer’s workflow ends at the export step, which is better handled by darktable, as well as the file management and EXIF business. So darktable is the core of the workflow, and it makes more sense to squeeze Krita into darktable than the opposite.

Lua is very limited here, since it doesn’t allow you to deal directly with the pipeline, but only with I/O. That means re-importing the Krita/Gimp output as a standalone picture in the collection and finishing up the work there. It duplicates digital assets and doesn’t sound like a streamlined workflow, since you need to split your edits between 2 files and manage updates manually.

Also, I’m pretty sure users will keep their bad habits of exporting darktable history to Gimp/Krita including the non-linear transforms, in 8 or 16 bits integers, which will trigger a whole load of issues if they do compositing and blending, but most of them don’t know it.

The point of branching Krita directly on darktable’s pipe is I can force the operations to happen before the non-linear transforms and prevent a whole set of troubles very few users are aware of.

That’s an important point, for several reasons. I agree with you that for dodge/burn/heal etc. this approach makes the most sense. But I would guess that once the power is given to the people, they will use it for purposes you cannot think of. Still, as long as the integrated $pixeleditor can sit at any position in the pixel pipe and can read external resources, I think this will suit most use cases. “Linking to” more than one image from darktable would be a great addition, e.g. by representing the actual image as one layer, but having the ability to drag additional images from darktable into the same “document”.

The point is that the end-to-end workflow (raw image from camera → exported image) does work most of the time, but there are many use cases that do not (yet) fit into it.

My personal use case is e.g. editing scanned film negatives with infrared scratch-detection information in the alpha channel. For now, the workflow is cumbersome: gimp + g’mic for scratch removal with my g’mic plugin, using the inpainting power of g’mic, then into darktable for correction and colour look. I would therefore try to get the $pixeleditor module before invert in the pixel pipe and make it load and preprocess the image. That would work as long as I am able to load images directly and use $pixeleditor as a source.

Another example would be compositing. There, it would be handy the other way 'round, to have several source images from darktable added into one $pixeleditor document, which is again managed in darktable.

Just some thoughts …

Sure, but let’s focus on the problem scope defined here. What users will do ultimately is their responsibility, and it will void the guarantee if they don’t comply with the guidelines.

Indeed, good idea. Not sure how to bend the UI to make that not awful though, but I will keep it in mind.

Compositing in scene-linear is definitely in the scope here.

Luckily the module did not recognize this yet and still accepts my tiff input files (without alpha channel). As many people report, it even works much better with real scans than with raw files.

My bad, I just checked the source code, it works on demosaiced/non-mosaiced data too.

Hm, it is not that bad. I still use VueScan (1 of the 2 non-libre programs I still use), but its inpainting is much worse than what g’mic offers. And for tiff files the invert module of darktable works great; it just seems broken for raw data.

Kids were crying and coughing the whole night, so I had some time to think about things while trying to get them back into bed. A prerequisite for all of this is a mechanism to permanently cache intermediate results of the pixel pipe, plus some mechanism to check easily whether things changed. E.g. this could be done by storing a checksum of the serialized parameters of the modules below, and recomputing on demand if the checksum is out of date and the user opens the external editor via the module. I think eventually there may be 2 different modules, which might share the majority of code but present different use cases to the user:

  1. The through module: reads the (cached) output of the pixel pipe below, feeds it as a layer to the external editor, and reads back the output of the external editor. The image size may change through this procedure. The through module should have an option to bypass the pixel pipe below and directly load the file with the pixel editor (e.g. for loading an alpha layer).
  2. A branching module: this module can sit anywhere in the pipe and will permanently store the intermediate result of the pixel pipe at its position whenever parameters below change. It also adds a handle into a table in the database that allows the external editor to re-use this result. The external editor has special layers that read these intermediate results, so that several of these outputs can be added to a composite. Some UI to select these would be required in the external editor.

Hash functions should be used in several places to ensure that nothing is linked by file names only, so that resources can be shared among people, or reconstructed after a crash.
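That “nothing linked by file names only” idea boils down to a content-addressed cache. A minimal sketch in Python, assuming the parameters of the modules below the current position can be serialized to bytes (the function names here are hypothetical):

```python
import hashlib


def cache_key(serialized_params: bytes) -> str:
    """Derive a stable identifier from the serialized parameters
    of all modules below the current pipe position."""
    return hashlib.sha256(serialized_params).hexdigest()


def cache_filename(serialized_params: bytes) -> str:
    # The intermediate result is stored under its content hash,
    # so it can be shared or reconstructed after a crash,
    # independent of any file path.
    return f".cache_{cache_key(serialized_params)}.pfm"
```

Because the file name is derived from the content of the parameters, a stale cache simply stops being referenced when parameters change, with no path bookkeeping needed.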

The combination of both features would make the whole thing extremely flexible, while the functionality is shared code-wise; only the UI differs.

Don’t know if this makes sense, but dreaming is allowed I think …

OK, then let’s come back to your problem “Wiring darktable with Krita”.

@anon41087856

  1. Do you have an example tutorial or YouTube video showing how Photoshop and Lightroom work together?
  2. You should talk with @Boudewijn_Rempt (Krita maintainer) and perhaps visit him for the Krita changes.

Things that are not clear to me:

  1. How to check if Krita is installed. (On all platforms: macOS, Windows, Linux, Linux AppImage, Linux Snap, Linux Flatpak.)
  2. How to transfer the image data to Krita. (File, memory-mapped file, D-Bus or something else?)
  3. In Krita the base layer must be locked. (Doesn’t sound too hard.)
  4. The “rendered” image needs to be transferred back to darktable. (Do you plan to save the rendered image somewhere, e.g. in the xmp, or is it re-rendered every time the image is opened?)
  5. The Krita file needs to be saved somewhere too. (Inside the xmp or as a single file.)
  6. To preview Krita changes, Krita needs to run for every change (slow), or even better, be kept open invisibly in the background. (Sounds better, but needs more resources (RAM).)
  7. In Krita the “Save” functionality needs to be pimped.

I made a small list of pros and cons. I’m not sure if it is useful:
Pro:

  • You will get the best solution for your problem
  • Small scope
  • Full control

Con:

  • Not very flexible
  • Will not work with other tools
  • I don’t expect too many users with this setup (darktable, Krita, OCIO)
  • Needs changes in Krita and darktable

@anon41087856 Any updates pertaining to this? I’m not expecting anything other than the initial stage of development, i.e. the idea and theory.

https://cpn.canon-europe.com/content/education/technical/lightroom_and_photoshop_cc_workflow_smart_objects.do

That mechanism already exists in darktable; actually the whole tone equalizer module works like that. A big difference though is that the current caches are at screen size, while here we want to export the full-size image to Krita.

This has little interest to me, since a Lua script can easily do it and a module is not really needed for that. Dealing with I/O is easy, but it will need to duplicate the file in the collection/DB.

I was just thinking of storing the intermediate output of the darktable pipeline at module level as the base layer of a sidecar Krita file, in the same spirit as the .XMP; e.g. in the same directory as DSC_000x.raw and DSC_000x.xmp, store DSC_000x.kra. Let the user save and quit Krita.

Back in darktable, with some refresh button in the module, check that the .kra is available, process it through the Krita CLI and store its output in a hidden .DSC_000x.kra.pfm as 32-bit float. Inject that back into the darktable pipeline and run the final modules. Finally, get 3 hashes:

  1. the one of darktable module params prior to the Krita module
  2. the one of the .kra file
  3. the one from the .pfm cache.

Then save them to the darktable DB (in module parameters), and as long as none of the hashes change, keep using that cached .pfm. Otherwise, invalidate and refresh the base layer in the .kra and the cached output .pfm.
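A rough sketch of that three-hash validity check in Python, assuming the module params are available as a dict and the files follow the sidecar naming above (all helper names here are hypothetical):

```python
import hashlib
import json
from pathlib import Path


def file_hash(path: Path) -> str:
    # Hash of a file's content (.kra sidecar or cached .pfm).
    return hashlib.sha256(path.read_bytes()).hexdigest()


def params_hash(params: dict) -> str:
    # Hash of the darktable module params prior to the Krita module.
    blob = json.dumps(params, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()


def cache_is_valid(stored: dict, params: dict, kra: Path, pfm: Path) -> bool:
    """True if none of the three stored hashes changed; otherwise the
    base layer in the .kra and the cached .pfm must be refreshed."""
    if not (kra.exists() and pfm.exists()):
        return False
    return (stored.get("params") == params_hash(params)
            and stored.get("kra") == file_hash(kra)
            and stored.get("pfm") == file_hash(pfm))
```

Editing the .kra in Krita, changing a module below, or losing the .pfm all show up as a hash mismatch, so one code path handles every invalidation case.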

That way, we can bypass the database (that will be a mess if 2 instances of darktable are used on a cloud-hosted file) and avoid file path updates and such. Also, all Krita files stay available on the disk to be opened directly from Krita later, and automatically updated in darktable at next opening.

Also, for now I would be careful about allowing multiple synchronisations of pipelines to a single file, because I fear the layer synchronisation could backfire in strange ways. I would let users do the compositing manually (importing files as new layers) and use the background image (for example) as a master file upon which the darktable tone mapping + colour conversion is applied last.

I fear the shared code would be just a couple of lines. IOPs need way more checks than I/O modules.

Given that Krita can exist in many forms (system-wide or user-wide), I think it will be best to check the usual /bin places, then let users input the path to the binary if nothing is found. Note that I have no dev experience with Win/Mac.
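A sketch of that lookup: search $PATH first, then a few fallback locations, then give up and let the UI ask the user. The fallback list here is a guess and would differ per platform and packaging (AppImage, Snap, Flatpak):

```python
import os
import shutil

# Hypothetical fallback locations; Snap/Flatpak installs would add more.
FALLBACK_PATHS = ["/usr/bin/krita", "/usr/local/bin/krita", "/snap/bin/krita"]


def find_krita(user_path=None):
    """Return a path to the krita binary, or None so the UI can
    prompt the user to enter the path manually."""
    if user_path and os.access(user_path, os.X_OK):
        return user_path           # explicit user override wins
    found = shutil.which("krita")  # then search $PATH
    if found:
        return found
    for candidate in FALLBACK_PATHS:
        if os.access(candidate, os.X_OK):
            return candidate
    return None
```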

Krita has an auto-save feature, so as long as the file keeps changing, darktable will see the hash changing and will update its cache accordingly.

Yeah, but I don’t really get where the whole “Gimp is the opensource Photoshop” thing comes from. The cool features of Gimp essentially duplicate darktable’s, minus the non-destructive part and the batch editing (why on Earth would you work on photographs encoded in 8/16-bit integer with a TRC/OETF/gamma on top?).

So the only thing Gimp does that darktable doesn’t is the painting job, and at that game Krita is much more mature and way more polished, yet somehow under-advertised. Let’s hope that changes once people discover it. We stand at the dawn of a new workflow: scene-linear, more robust, more consistent and more straightforward. It’s time to reinvent a consistent and minimalist workflow. Some people out there rely on their camera to make a living; they need a no-nonsense way to work faster.

I have a question on the master layer part. How would you deal with Krita filter masks being applied to the master layer? Filter masks are essentially adjustment layers, but the filters aren’t treated as layers; they filter the pixels within the layer instead.

The whole .kra file (layers and filters) would be flattened through the Krita CLI and sent to darktable as a .pfm or .exr 32-bit float.
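Krita does expose a headless export on its command line (`--export --export-filename`), which flattens the document on save. A small subprocess wrapper might look like this; the file names are placeholders, and the output format would be picked via the extension:

```python
import subprocess


def flatten_command(kra_path, out_path):
    # Krita flattens all layers and filter masks on export;
    # the output format is inferred from the file extension.
    return ["krita", kra_path, "--export", "--export-filename", out_path]


def flatten_kra(kra_path, out_path):
    """Run the export; raises CalledProcessError if Krita fails."""
    subprocess.run(flatten_command(kra_path, out_path), check=True)
```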

Could you show one sample (finished) photo of darktable + Krita? I’m curious about your futuristic vision; it sounds like we would indeed get a more polished workflow.

Hm, loading an image itself may be easy, but I am talking about something else. This would be a simple addition to your module, but with big impact. Your module will load a cached image and deliver a cached image. What I have in mind is a simple checkbox that tells darktable not to load the underlying image, and tells Krita to load the image by its native mechanism. That way you could e.g. load a multi-layer tiff as layers, edit it in Krita, and run the flat output through darktable for colour grading. Another example would be a film-negative workflow: load the image with the alpha layer in Krita, use the scratch removal there to inpaint the scratches, and pass the flat output to darktable for inverting and further processing. This does not sound too complicated to me; it’s just turning on or off some functionality that is there anyway once your module exists.