Hm, it's not that bad. I still use VueScan (one of the two non-libre programs I still use), but its inpainting is much worse than what G'MIC offers. And for TIFF files the invert module of darktable works great; it just seems broken for raw data.
Kids were crying and coughing the whole night, so I had some time to think about things while trying to get them back to bed. A prerequisite for all of this is a mechanism to cache intermediate results of the pixel pipe permanently, plus some mechanism to check easily whether things changed. E.g. this could be done by storing a checksum of the serialized parameters of the modules below, and recomputing on demand if the checksum is out of date and the user opens the external editor via the module. I think eventually there may be two different modules, which might share the majority of code but present the user with different use cases:
- The through module: reads the (cached) output of the pixel pipe below, feeds it as a layer to the external editor, and reads back the output of the external editor. The image size may change through this procedure. The through module should have an option to bypass the pixel pipe below and directly load the file with the pixel editor (e.g. for loading an alpha layer).
- A branching module: this module can sit anywhere in the pipe and will permanently store the intermediate result of the pixel pipe at its position whenever the parameters below change. It also adds a handle into a table in the database that allows the external editor to reuse this result. The external editor has special layers that read these intermediate results, so that several of these outputs can be added to a composite. Some UI to select these would be required in the external editor.
Hash functions should be used in several places to ensure that nothing is linked by file names only, so that resources can be shared among people or reconstructed after a crash.
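A minimal sketch of that checksum idea in Python (the module names and serialized parameter bytes below are made up for illustration; darktable would serialize its real module parameters):

```python
import hashlib

def params_hash(modules):
    """Checksum of the serialized parameters of all modules below the
    external-editor module in the pipe. Names/params are illustrative."""
    h = hashlib.sha256()
    for name, params in modules:
        h.update(name.encode("utf-8"))
        h.update(params)
    return h.hexdigest()

# If the stored hash no longer matches, the cached intermediate result
# is stale and must be recomputed before opening the external editor.
stored = params_hash([("exposure", b"+1.0 EV"), ("white balance", b"5500 K")])
current = params_hash([("exposure", b"+1.5 EV"), ("white balance", b"5500 K")])
stale = stored != current   # True: a module below changed
```

Because the handle is a content hash rather than a file name, the same cached result can be found again after a crash or on another machine, which is the point made above.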
The combination of both features would make the whole thing extremely flexible, while the functionality is shared at the code level; only the UI differs.
Don’t know if this makes sense, but dreaming is allowed I think …
OK, then let's come back to your problem, “wiring darktable with Krita”.
- Do you have an example tutorial or YouTube video showing how Photoshop and Lightroom work together?
- You should talk with @Boudewijn_Rempt (Krita maintainer) and perhaps visit him for the Krita changes.
Things that are not clear to me:
- How to check if Krita is installed (on all platforms: macOS, Windows, Linux, Linux AppImage, Linux Snap, Linux Flatpak).
- How to transfer the image data to Krita (file, memory-mapped file, D-Bus, or something else?)
- In Krita the base layer must be locked. (Doesn't sound too hard.)
- The “rendered” image needs to be transferred back to darktable. (Do you plan to save the rendered image somewhere, e.g. in the XMP, or is it re-rendered every time the image is opened?)
- The Krita file needs to be saved somewhere, too (inside the XMP or as a single file).
- To preview the Krita changes, Krita needs to run on every change (slow), or even better, be kept open invisibly in the background. (Sounds better, but needs more resources (RAM).)
- In Krita the “Save” functionality needs to be pimped.
I made a small list of pros and cons. I'm not sure if it is useful:
Pros:
- You will get the best solution for your problem
- Small scope
- Full control
Cons:
- Not very flexible
- Will not work with other tools
- I don't expect too many users with this setup (darktable, Krita, OCIO)
- Needs changes in Krita and darktable
@aurelienpierre Any updates pertaining to this? I'm not expecting anything other than the initial stage of development, i.e. the idea and theory.
That mechanism already exists in darktable; actually the whole tone equalizer module works like that. A big difference, though, is that the current caches are at screen size, while here we want to export the full-size image to Krita.
This has little interest to me, since a Lua script can easily do it and a module is not really needed for that. Dealing with I/O is easy, but it will need to duplicate the file in the collection/DB.
I was just thinking of storing the intermediate output of the darktable pipeline at module level as the base layer of a sidecar Krita file, in the same spirit as the .XMP; e.g. in the same directory as DSC_000x.raw and DSC_000x.xmp, store DSC_000x.kra. Let the user save and quit Krita.
Back in darktable, with some refresh button in the module, check that the .kra is available, process it through the Krita CLI and store its output in a hidden .DSC_000x.kra.pfm as 32-bit float. Inject that back into the darktable pipeline and run the final modules. Finally, get 3 hashes:
- the one of darktable module params prior to the Krita module
- the one of the .kra file
- the one from the .pfm cache.
Then save them to the darktable DB (in the module parameters), and as long as none of the hashes change, keep using that cached .pfm. Otherwise, invalidate and refresh the base layer in the .kra and the cached output .pfm.
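A rough Python sketch of that three-hash invalidation check (file names and the DB storage are simplified stand-ins; this is not darktable's actual code):

```python
import hashlib
import os
import tempfile

def file_hash(path):
    """SHA-256 of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def cache_is_valid(stored_hashes, params_hash, kra_path, pfm_path):
    """Compare the triple stored in the darktable DB against current state:
    (hash of module params before the Krita module, .kra hash, .pfm hash)."""
    current = (params_hash, file_hash(kra_path), file_hash(pfm_path))
    return stored_hashes == current

# Demo with throwaway files standing in for DSC_000x.kra and its .pfm cache
tmp = tempfile.mkdtemp()
kra = os.path.join(tmp, "DSC_0001.kra")
pfm = os.path.join(tmp, ".DSC_0001.kra.pfm")
with open(kra, "wb") as f:
    f.write(b"kra bytes")
with open(pfm, "wb") as f:
    f.write(b"pfm bytes")

stored = ("module-params-hash", file_hash(kra), file_hash(pfm))
valid = cache_is_valid(stored, "module-params-hash", kra, pfm)   # True
```

If any of the three hashes differs, the module would regenerate the .kra base layer and the .pfm output instead of reusing them.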
That way, we can bypass the database (which would be a mess if two instances of darktable are used on a cloud-hosted file) and avoid file-path updates and such. Also, all Krita files stay available on disk to be opened directly from Krita later, and are automatically updated in darktable at the next opening.
Also, for now I would be careful about allowing multiple pipelines to synchronise to a single file, because I fear the layer synchronisation could backfire in strange ways. I would let users do the compositing manually (importing files as new layers) and use the background image (for example) as a master file upon which the darktable tone mapping + colour conversion is applied last.
I fear the shared code would be just a couple of lines. IOPs need way more checks than I/O modules.
Given Krita can exist in many forms (system-wide or per-user), I think it will be best to check the usual /bin places and then let users input the path of the binary if nothing is found. Note that I have no dev experience with Windows/Mac.
Krita has an auto-save feature, so as long as the file keeps changing, darktable will see the hash changing and will update its cache accordingly.
Yeah, but I don't really get where the whole “Gimp is the open-source Photoshop” idea comes from. The cool features of Gimp essentially duplicate darktable's, minus the non-destructive part and the batch editing (why on Earth would you work on photographs encoded as 8/16-bit integers with a TRC/OETF/gamma on top?).
So the only thing Gimp does that darktable doesn't is the painting job, but at that game Krita is much more mature and way more polished, yet somehow under-advertised. Let's hope that changes once people discover it. We stand at the dawn of a new workflow: scene-linear, more robust, more consistent and more straightforward. It's time to reinvent a consistent and minimalist workflow. Some people out there rely on their camera to make a living; they need a no-nonsense way to work faster.
I have a question on the master-layer part. How would you deal with Krita filter masks being applied to the master layer? Filter masks are essentially adjustment layers, but the filters aren't treated as layers; they rather filter the pixels within the layer.
The whole .kra file (layers and filters) would be flattened through the Krita CLI and sent to darktable as a 32-bit float .pfm or .exr.
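For reference, a sketch of driving that headless flatten-and-export from Python. Krita's CLI does provide `--export` and `--export-filename`, but the binary location and the exact invocation should be treated as assumptions to verify per platform:

```python
import subprocess

def krita_export_cmd(kra_path, out_path, krita_bin="krita"):
    """Build the headless export command:
    krita <file.kra> --export --export-filename <out>.
    Krita flattens the layer stack to the target format on export."""
    return [krita_bin, kra_path, "--export", "--export-filename", out_path]

def flatten_kra(kra_path, out_path, krita_bin="krita"):
    """Run the export; raises CalledProcessError if Krita fails."""
    subprocess.run(krita_export_cmd(kra_path, out_path, krita_bin),
                   check=True)
```

darktable would call something like `flatten_kra("DSC_0001.kra", ".DSC_0001.kra.pfm")` whenever the .kra hash changes, then hash and cache the result.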
Could you show one sample (finished) photo from darktable + Krita? I'm curious about your futuristic vision and whether we will in fact get a more polished workflow.
Hm, loading an image itself may be easy, but I am talking about something else. This would be a simple addition to your module, but with big impact. Your module will load a cached image and deliver a cached image. What I have in mind is a simple checkbox that tells darktable not to load the underlying image, and tells Krita to load the image by its native mechanism. That way you could e.g. load a multi-layer TIFF as layers, edit it in Krita, and run the flat output through darktable for colour grading. Another example would be a film-negative workflow: load the image with an alpha layer in Krita, use the scratch removal there to inpaint the scratches, and pass the flat output to darktable for inverting and further processing. This doesn't sound too complicated to me: just turning on or off some functionality that is there anyway once your module exists.
I don't have a ready sample of that, and I don't think a finished product would bring anything to the conversation. I mean, good results can be achieved even with bad tools if you master them. The point is to simplify the workflow and increase your chances of getting good results in a minimal amount of time.
But that is exactly what I'm talking about. The whole conversation since the beginning has been about dealing with painted layers in Krita, then reinjecting them into the darktable pipeline.
Yes. I mean something else (maybe easier with pictures):
What I understood you were describing
What I think would make sense to add
Edit: I know that this may be achieved by just loading the image in Krita directly, but that would break the link to the source image. With my idea, you would be able to copy-paste edits in darktable that include the Krita module and have it always operate on the correct source image. Of course this may only help with non-destructive, “automatic” Krita documents (or semi-automatic ones, meaning you get a reasonable Krita starting point when copy-pasting edits in darktable), but this already helps a lot (e.g. for my negative workflow).
In the Lua scripts we just do a which command on Linux to find the executable. If we are on Mac/Windows we have the user specify the executable location.
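A Python equivalent of that lookup, combining the PATH search with the “check the usual places, then ask the user” fallback suggested earlier. The candidate paths are typical install locations, not guaranteed on any given system:

```python
import os
import shutil

# Common Krita install locations (illustrative; Snap/Flatpak paths vary).
CANDIDATES = [
    "/usr/bin/krita",
    "/usr/local/bin/krita",
    "/snap/bin/krita",
    "/var/lib/flatpak/exports/bin/org.kde.krita",
]

def find_krita(user_override=None):
    """Return a usable Krita binary path, or None if nothing is found.
    Order: user-supplied path, PATH lookup (like `which krita`), then
    the common locations above."""
    if user_override and os.access(user_override, os.X_OK):
        return user_override
    found = shutil.which("krita")
    if found:
        return found
    for path in CANDIDATES:
        if os.access(path, os.X_OK):
            return path
    return None
```

On Windows/macOS the candidate list would need platform-specific entries, which is why falling back to a user-specified location stays necessary.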
OK, so I did my first Krita + darktable retouch today. I must say… dodging and burning in linear is much more pleasant; it's far easier to blend the brush strokes than in display-referred (as expected… Gaussian blur just works in linear).
So, I did the basic contrast editing of this picture in darktable and exported to 32-bit OpenEXR without the filmic curve (the pic shown here has filmic on, though):
I did the dodging and burning in Krita and saved the file as a .kra, then exported it back to OpenEXR (it seems OpenEXR doesn’t save layers). The dodging is performed by painting white on a layer blended in addition, the burning by painting black on a layer blended in multiplication. I also tried to dodge by painting black on a layer blended in division (which falls back to an exposure compensation), but after a while you end up dividing by almost zero and artifacts begin to show (although it gives a much better transition). Here is the flattened stack of D&B layers (notice I removed the underwear marks and scars only with dodging and burning):
Then I imported the Krita export back in darktable as OpenEXR, added some more sharpness and applied filmic:
It's been a long time since I've done that kind of editing; it makes me very happy, I missed it (it was 6 hours of work, but I'm out of shape and I had to fight the software a bit).
I admit I do prefer the original, and the original feels and looks a lot more natural.
I can see a smooth transition.
(it seems OpenEXR doesn’t save layers)
At least Blender is able to save EXR files with layers, and Affinity Photo is able to open these files:
I’ve just tested with a file from this Natron bug and Krita is able to open EXR files with layers.
I tried OpenEXR with layers; it can work, but the alpha channel is completely messed up when you reopen it.
I can’t confirm this. I tested with Krita 4.2.8. Could you perhaps post your .kra file?
I have no experience with Krita, but potentially it sounds great. I’m thinking in particular of painting over small specular reflections that push into clipping… I have trouble dealing with these via the dt masking system. That may just be my inexperience, but it was relatively easy to over-paint these in LR.
No, it happens when using OpenEXR in and out: trying to save the layers, then re-opening the file. I was told it's because Krita doesn't use premultiplied alpha (seriously, guys?).
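To illustrate the straight-vs-premultiplied distinction behind that mess-up (OpenEXR conventionally stores colour channels premultiplied by alpha), here is a toy example; the pixel values are invented:

```python
def premultiply(rgb, a):
    """Straight -> premultiplied: scale colour by coverage (alpha)."""
    return tuple(c * a for c in rgb)

def unpremultiply(rgb, a):
    """Premultiplied -> straight (undefined where alpha == 0)."""
    return tuple(c / a if a else 0.0 for c in rgb)

# A pure-red pixel at 50% coverage:
straight_rgb, alpha = (1.0, 0.0, 0.0), 0.5
premult_rgb = premultiply(straight_rgb, alpha)   # (0.5, 0.0, 0.0)
# A reader that expects premultiplied data but receives straight data
# (or vice versa) shows such pixels at the wrong brightness wherever
# alpha < 1, which matches the "completely messed up" channel above.
roundtrip = unpremultiply(premult_rgb, alpha)    # back to (1.0, 0.0, 0.0)
```

So a writer emitting straight alpha into a format whose readers assume premultiplied data corrupts every semi-transparent pixel on round-trip, even though fully opaque pixels look fine.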