Ideas and brainstorming about usability improvements

I am back to PhF business after a short break, and I would like to share some recent thoughts about how the processing workflow can be improved, and also to collect suggestions that are currently scattered among several threads and difficult to track/remember.

First of all, some ideas about how the input to the various tools is currently handled, and how this can be improved/simplified.

At the beginning of PhF development, each module could only take its input from the closest underlying non-hidden layer. In order to provide more flexibility I introduced the clone layer tool, which made it possible to insert a copy of a layer anywhere in the pipeline. In addition, the clone tool allowed a specific channel to be selected from the copied layer.

However, in the current version each module allows the input layer to be selected directly, and therefore the clone layer tool is not strictly needed anymore.

Hence, I propose to keep the clone layer tool but rename it to channel selector, because that is now its main purpose.
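
For reference, extracting one channel is essentially a one-liner with libvips, which photoflow is built on. A minimal sketch (only vips_extract_band() is the actual libvips call, the wrapper around it is just for illustration):

    #include <vips/vips.h>

    // Hypothetical wrapper: return a single channel (band) of the input
    // layer, which is what the proposed "channel selector" boils down to.
    VipsImage* select_channel(VipsImage* in, int channel)
    {
      VipsImage* out = NULL;
      if (vips_extract_band(in, &out, channel, NULL))
        return NULL; // extraction failed (e.g. channel index out of range)
      return out;
    }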

Another issue with the current workflow is the lack of a "default input" for mask layers. For example, if one opens an empty layer mask and adds a curves layer, nothing happens. This is simply because the mask is initially empty, and therefore the curves tool receives a NULL image and returns a NULL image.

This is at best highly non-intuitive, and needs to be fixed. My idea would be that in such a case the curves tool would get its input from the previous non-mask layer, converted to RGB luminance and encoded with the Lab L curve (so that mid-gray corresponds to 0.5).
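
To make the "mid-gray corresponds to 0.5" part concrete, this is roughly the per-pixel conversion I have in mind (just a sketch; the Rec.709 luminance weights are an assumption for the example, the real conversion would depend on the working colorspace):

    #include <cmath>

    // Sketch: convert one linear RGB pixel to Lab-L-encoded luminance in [0,1].
    float rgb_to_L(float r, float g, float b)
    {
      // relative luminance Y from linear RGB (Rec.709 weights assumed)
      float Y = 0.2126f * r + 0.7152f * g + 0.0722f * b;

      // CIE L* encoding: L* = 116 * f(Y) - 16, with the usual linear toe
      float f = (Y > 0.008856f) ? std::cbrt(Y) : (7.787f * Y + 16.0f / 116.0f);
      float L = 116.0f * f - 16.0f;

      // normalize to [0,1]; 18% mid-gray lands at about 0.5
      return L / 100.0f;
    }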

As an example, consider this layer arrangement:

    layer2
    layer1

layer2 takes its input from layer1. Now suppose one adds a curves tool to the initially empty mask of layer2. In my proposed scenario, the curves tool would in this case receive the output of layer1, converted to L-encoded luminance.
Does this make sense to you?

More generally, I want to introduce the concept of a "default" input for a given tool. For non-mask layers, the "default" input corresponds to the closest non-hidden underlying layer.
For mask layers, the "default" input corresponds to the closest non-hidden underlying layer in the mask or, if the layer is the first one in the mask, to the fallback described above.
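
In rough C++-like pseudocode, the lookup would be something like this (the Layer structure and its fields are invented for the example, they are not the actual photoflow code):

    // Sketch of the proposed "default input" lookup.
    struct Layer {
      bool   hidden;
      bool   is_mask;  // true if the layer lives inside a layer mask
      Layer* parent;   // for mask layers: the layer that owns the mask
      Layer* below;    // next layer underneath in the same stack, if any
    };

    Layer* default_input(Layer* l)
    {
      // closest non-hidden layer below, within the same stack
      for (Layer* d = l->below; d != nullptr; d = d->below)
        if (!d->hidden) return d;

      // non-mask layer with nothing below it: no default input
      if (!l->is_mask) return nullptr;

      // first layer of a mask: fall back to the default input of the layer
      // owning the mask, converted to L-encoded luminance (see above)
      return default_input(l->parent);
    }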

By the way, I am also working on a new tools interface, where the handling of the input and the blend settings is grouped into a separate tab.

The discussion is open for suggestions!!!

Makes sense. Still, it takes too many words to explain, which means it isn't intuitive. What would make it better is to train the user to take labeling seriously (I can think of several strategies but will let you think about it first) and to make labeling clear and unique by default. E.g., an input source labeled "previous" is not specific enough; I would prefer it to be the actual source's label.


I think making the mask input default to the L channel of the previous layer is a good idea - it's what I would usually do anyway, unless I was using a path or gradient mask.

As far as labelling the source goes, I think that the label should include 'previous' or similar to make it clear that the source layer will change if you reorder the layers (or add a new one), whereas if you specify a specific source layer it won't.

@afre @paulmiller

I was considering using the word "default" instead of "previous", because it might not always correspond to the previous layer... I will also try to add the actual name of the default layer, if possible, something like

default ("layer name")

default works. Showing the name of the actual source layer is good too.

.pfi files:
Currently the source image for the Raw Loader is saved as an absolute path. I think it would be helpful if this was a relative path instead (relative to the .pfi file) - that way you could move the raw and pfi files together (e.g. with Play Raws) without the path becoming invalid.


That is actually already implemented. If the input file is not found in the provided absolute path, it is searched in the same folder as the .pfi file.

If that does not work for you, then it's a bug...

The only issue I have at the moment is when files are exchanged between *nix and Windows systems, in which case the absolute paths are not interpreted correctly... I am working on a proper fix.
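
For reference, the lookup amounts to something like this (a sketch using C++17 std::filesystem, not the actual photoflow code):

    #include <filesystem>

    namespace fs = std::filesystem;

    // Try the absolute path stored in the .pfi first, then fall back to
    // the folder of the .pfi file itself. Note that a Windows-style path
    // opened on *nix is not split correctly by filename(), which is
    // exactly the cross-platform issue mentioned above.
    fs::path find_raw_file(const fs::path& stored_path, const fs::path& pfi_file)
    {
      if (fs::exists(stored_path))
        return stored_path;

      // same directory as the .pfi file, same file name
      fs::path candidate = pfi_file.parent_path() / stored_path.filename();
      if (fs::exists(candidate))
        return candidate;

      return fs::path(); // not found
    }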

Sorry it's taken so long to reply...

I've done some testing on macOS:
.pfi files will find the RAW file if it is in the same directory.
.pfi files with a Windows-style path won't open on macOS unless I manually change the path to a POSIX-style one.


This may be an idea too far:

The processing model for VIPS is a graph, not a linear pipeline. You can build a graph structure in photoflow with the use of clone layers to join branches together, but it gets confusing fast.
How about a node/graph-based user interface instead of a stack of layers?

Also, it would be nice to have more than one input to a node (for example, a Guided Filter where you could specify the image and the guide from different sources).

What I really want is an open source version of Quartz Composer...

That's something I have had in the back of my mind for a long time. You are right, photoflow is really a node-based editor, and layers are just a simplified representation of the pipeline that works best when the processing is "almost linear".

However, I have no time to code a node graph view from scratch. I have been looking for existing UI widgets that can represent and display node graphs, but I have not found anything so far. If you have some ideas, I would be very much interested!

Adding a node graph view would make photoflow stand out more in the family of image editors, so it makes a lot of sense to put some effort into it...

This is actually supported by the underlying pipeline, but not exposed in the UI. The guided filter is a good example of a filter where this would be helpful. I think I will include some work in this direction in the simplified-pipeline branch, where I am anyhow re-arranging the way node inputs are selected and how they are handled in the UI.
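
As a rough illustration of what exposing multiple inputs could look like (the structure names below are invented, they are not the actual simplified-pipeline code):

    #include <string>
    #include <vector>
    #include <vips/vips.h>

    // Sketch: a pipeline node with an arbitrary number of named inputs.
    // A guided filter would declare two of them ("image" and "guide"),
    // and the UI could bind each one to any layer.
    struct NodeInput {
      std::string name;    // e.g. "image" or "guide"
      VipsImage*  source;  // output of the layer selected in the UI
    };

    struct PipelineNode {
      std::vector<NodeInput> inputs;

      VipsImage* input(const std::string& name) const {
        for (const auto& in : inputs)
          if (in.name == name) return in.source;
        return nullptr;
      }
    };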

Isn't this what Natron does? I haven't looked at their code, but the GUI organization might be a basis for a node-based PhotoFlow...

Natron uses Qt and Python, and photoflow uses GTK.

I would appreciate a graph-based view as well. It would make so much more sense to me, as I am far from a linear thinker.

I'm probably going to re-write rawproc from wxWidgets to Qt in the coming year, probably with the same gImage image class. The processing architecture and the GUI don't have to be too closely coupled...

Certainly they don't have to be closely coupled, I totally agree! But the node representation in Natron is done with Qt, which means it is not directly usable in GTK.

Hello @ggbutcher

I'm probably going to re-write rawproc from wxWidgets to Qt in the coming year

Wow. I am really interested in your decision about moving your software to Qt!

Why not opt for the GTK toolkit instead, as Gimp, Darktable, RawTherapee, Photoflow, etc. do?

As for me, I only know a bit of Python (no C++, Java, etc.).
For my very modest needs as a "programmer", I have always preferred the Qt toolkit over the out-of-the-box Tk (Tkinter) libraries available with Python (there is no contest IMHO, since Qt wins hands down).

Most of the software I work with leverages the Qt libraries (Krita, Shotcut, QGIS, Freecad-Qcad, etc.) and I have always read very good reviews about it, especially if you plan to target Android in the future (thanks to the additional QML toolkit).

Thanks a lot in advance for any insight about your decision :slight_smile:

Filmulator by @CarVac is written using Qt, if I'm not mistaken, in case you want some Qt code to peruse.

Hello everyone

As regards Qt, there is also the Olive video editor, if you are interested [1]. It is open source and actively developed on GitHub.
Right now there is also an upcoming feature [2] consisting of node-based structures/effects, which, I surmise, is inspired by Natron or Blender. IMHO, this might be a game changer compared to other open source editors such as Shotcut and Kdenlive (Qt as well).

Maybe I am totally wrong, but from my personal perception it does look like most new graphics software is developed with the Qt toolkit. G'MIC was even recoded from scratch from GTK to the present Qt version.
In the past I have read that even Andrea Ferrero was considering moving PhotoFlow to Qt :slight_smile:

[1] https://www.olivevideoeditor.org/
[2] https://www.patreon.com/posts/olive-roadmap-26853206

Also Luminance HDR by @fcomida

Remember that there's a distinction between Qt Widgets and Qt Quick.

Filmulator uses Qt Quick.
