New Sigmoid Scene to Display mapping

Oh thanks, I’ve been looking for terms to distinguish the role of filmic and other such curves from the rendition transform. ACES to the rescue…

Regarding “output referred”, to my thinking anything that departs from linear-light is not linear anymore. ANY tone curve that is not a straight line from (0,0) to (255,255) (all you pedanticists, apologies for the 8-bit description) is not light-linear… The ACES workflow recognizes the need for this in the separate specification of the RRT and the ODT, no?

Right now, I have the original form committed to the dev branch of rawproc, so it will be in the upcoming version 1.1 (just waiting for libraw to roll out their impending version). I’ll try this in a dev branch, adding a third parameter, w.

Right now, I’m messing with images of various dynamic ranges and the two-parameter double logistic; it works really well in a lot of cases. I still need to revert to other means for images that are predominantly dark but contain a relatively high-intensity patch, such as sunsets, but those really border on needing multiple-exposure capture and HDR stacking anyway…
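For anyone who wants to play along, here is a minimal sketch of the kind of two-parameter log-logistic curve we are talking about (my own illustrative Python, not the code committed to rawproc; the optional white parameter is just a guess at what a third parameter w might control):

```python
import numpy as np

def log_logistic(x, contrast=1.7, midgrey=0.18, white=1.0):
    """Log-logistic CDF used as a scene-to-display tone curve (illustrative only).

    x        -- scene-linear values (>= 0)
    contrast -- slope/shape parameter (the beta of the log-logistic CDF)
    midgrey  -- scene value mapped to half of the display maximum (the alpha)
    white    -- hypothetical third parameter scaling the display maximum
    """
    x = np.maximum(x, 1e-9)  # guard against division by zero at pure black
    return white * x**contrast / (x**contrast + midgrey**contrast)

# Mid-grey lands at half the display maximum; highlights roll off asymptotically
# towards `white` instead of clipping.
print(log_logistic(np.array([0.0, 0.18, 1.0, 16.0])))
```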

Doesn’t “linear-light” refer to what the values of the pixels mean? In linear-light, twice the value represents twice the light energy. Any operation that respects that relation operates in linear-light (e.g. the tone equaliser, or exposure correction with masking), even though relations between tones can change.

After the transform to display-referred, that linear relation between pixel value and energy doesn’t exist anymore.
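A tiny numeric illustration of that distinction (arbitrary example values):

```python
import numpy as np

scene = np.array([0.01, 0.10, 0.50])   # scene-linear pixel values

# In linear light, one stop more exposure is a plain multiplication by 2:
# the represented light energy doubles exactly.
print(scene * 2.0)                      # [0.02 0.2  1.  ]

# After a display-style gamma encode, the same multiplication no longer
# corresponds to a doubling of light energy.
encoded = scene ** (1 / 2.2)            # simple 2.2 gamma as a stand-in encode
print((encoded * 2.0) ** 2.2)           # ~[0.046 0.46 2.3] -- not a clean doubling
```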

According to your definition, all you could do in the light-linear part would be multiplication and addition operations (exposure and offset, respectively), possibly on a per-channel basis. So, no tone equaliser (which is a tone curve operation), and no masking/blending operations.

If I understand the ACES definitions of RRT and ODT correctly, the RRT transforms the data from linear to display-referred in a reference colour space, while the ODT transforms the result from the RRT to something suitable for a given device. That means that a given RRT can be used for all devices, whereas the ODT is specific to one particular device.

You’re welcome! I think it’s a discussion to be had quite thoroughly. The photo world has quite different output formats to accommodate, from low-contrast prints to, in the future, HDR monitors.

I am not 100% sure where ACES actually draws the line, as in some infographics they put the grading of images into the scene-referred linear-light section and the look modification transform (LMT) into the display-referred section. :man_shrugging: They obviously have a fine-grained idea of which creative decisions live in which part of the workflow.

One of the ACES workflow schemes (a rough code sketch of the chain follows the list):
Camera data (not necessarily raw)
:arrow_right: Input Device Transform (IDT, transforms to linear-light in the ACES colorspace, saved as an OpenEXR half-float sequence)
:arrow_right: color grading, compositing, VFX
:arrow_right: Look Modification Transform (LMT, view transform but still in scene-linear?)
:arrow_right: Reference Rendering Transform (RRT, transforms scene-referred to a large-gamut, high-dynamic-range output space)
:arrow_right: Output Device Transform (ODT, specific to the intended viewing device: Rec.709, DCI-P3, Rec.2020, Rec.2100, etc.)
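In (hypothetical, heavily simplified) code, the scheme above chains up roughly like this; every name below is mine, not anything from the ACES reference implementation:

```python
# Heavily simplified sketch of the chain above. Every transform is passed in as a
# plain function; none of the names come from the real ACES code.
def render_for_device(camera_data, idt, grade, lmt, rrt, odt):
    aces = idt(camera_data)   # IDT: camera data -> scene-linear ACES (e.g. EXR half float)
    aces = grade(aces)        # color grading / compositing / VFX, still scene-linear
    aces = lmt(aces)          # LMT: the creative "look", still scene-referred
    out = rrt(aces)           # RRT: scene-referred -> large-gamut, high-dynamic-range output space
    return odt(out)           # ODT: per target device (Rec.709, DCI-P3, Rec.2020/2100, ...)

# The point of the split: one graded master, many device outputs, e.g.
#   rec709_image  = render_for_device(data, idt, grade, lmt, rrt, odt_rec709)
#   rec2100_image = render_for_device(data, idt, grade, lmt, rrt, odt_rec2100)
```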

Technically this should mean that for a different medium you do not have to regrade your master… but of course it can be a creative decision to do so… maybe that’s why the LMT exists… not sure, I would have to read a bit more about it. It’s not trivial for the movie industry to go from scene-referred to display-referred. Also quite cool to see that they explicitly want to be able to ingest everything, not just raw sensor data.

EDIT: I took so long to type the above that @rvietor was faster!
EDIT2: An LMT is explicitly a LUT, whereas for some reason the color grade is not. Maybe because VFX produces a ‘new’ EXR sequence from the IDT-transformed original EXR sequences, and grades should behave the same? I am unclear about this.

Yes. Says, Bear-Of-Little-Brain… :bear:

Seeing that workflow reminds me of a question that has been in the back of my mind since the topic first showed up on pixls.

How relevant is this to photography?

The reason I’m asking is that photography has some different characteristics from video and a very different delivery system. The “output devices” are insanely varied, from HDR displays via magazines to newsprint. So far, the idea that this could be done automatically through characterisation has never materialised? There are still skilled people involved in transforming the file delivered by the photographer into something suitable for the medium. This is rarely done automatically with high-end work. It seems it’s just not possible yet?


A fundamental consideration of the ACES workflow is the consistent integration of content from a variety of sources, including still photographs. That happens more on the input side, where all sources converge in a consistent colorspace.

Yes, but as I understand it this is because video typically comes from many sources and is cut to form a single whole. The photographs in a book for instance, or any other serial presentation, have much less strict demands even if some of the same issues apply.

Perhaps more importantly, you have continuously changing light conditions etc., so the looks and other edits have to be automatic. You’d rather not edit every single frame as a photo, but need looks maintained smoothly over transitions. Photography, being made up of discrete parts, has been handled one image at a time. Typically consistency is slightly sacrificed for achieving the best in each photo. I don’t dispute that consistency is important for most photographers, but it can’t be perceived with the fidelity of film, so the individual photo has some precedence.

I don’t know anything about ACES except what I’ve read here, but if what you are saying is correct, it’s perhaps geared towards photographs as assets in larger workflows, such as computer games etc.?

If it’s art, the photographer will never* accept that the printer will do a good job or that it can be automated. Size, paper, and the space where it will be displayed may influence the look creatively. You don’t want the exact same look at A5 size as at 2 m on a gallery wall. If it’s a magazine, they will tweak it slightly for their paper and the look they want, at best in conversation with the photographer.

So it just seems practical, and the way things are, to edit per output. Sometimes the latter edits are done from TIFFs or some such that have the general look baked in.

/ * false


Honestly, I am not sure. I see the need to deliver to the insanely varied output devices you mention, and I look at how other people deal with this. I agree that consistency in movies needs to be higher than in photography, yet having that same consistency certainly would not hurt. Especially so if FOSS raw editor devs will soon need to have conversations with Wayland devs, for example.

Also, with regard to this topic, the ‘new way’ is, rightfully so, to distinguish somehow between scene-referred and display-referred editing. If you acknowledge that separation, look at what ACES does, and look at the increasing number of output device needs, I think a discussion about where certain creative decisions should live, and where probably not, is straightforward. A lot of resistance regarding log-logistic is founded on and argued with a ‘this is the old way’ logic, where even the ‘new way’ logic conflates look, grading and output device transform to a certain degree.

I agree.

I have had discussions with people who printed my work…
I think that’s the perfect point to define an interface akin to ACES. If a printer has their own opinions on how a print should look, you as a photographer need to be able to point to a neutral reference. Instead of just showing it on an sRGB monitor, you could then also supply a DCI-P3 reference on your mobile phone or an HDR reference on a TV. If the print shop still wants to do their own interpretation, they of course can, as they always could.

I agree. Those edits will not necessarily benefit from large-gamut HDR outputs then. Not a bad thing. But then it’s even more strange why log-logistic gets argued against, no?

EDIT: another argument: why is the photography world still stuck with sRGB JPEGs? The movie industry has in the meantime moved consumers from Rec.601 to Rec.709 to Rec.2100. Maybe this is because there was no infrastructure to move processes forward on? So an ACES-like infrastructure could enable us to finally move to better formats, color spaces and display media. (Sometimes I am a bit idealistic… :smiley: )

Well, that is sort of the nub of the matter. In which workflows is the separation of scene, look and display actually a thing? The scene part seems particularly geared toward mixing of inputs? The look and display edits/transforms are usually one. And they are one because in practice it’s quite rare to want the same look for different display media or even situations.

But it could also be the AD or someone like that who is responsible for the look of the publication. They won’t be doing a complete reinterpretation of the photograph, but might tweak it slightly in such a way that it’s both a look and a display edit, based on the look of the photographer’s edit.

The arguments for having the math in the right place are fine, but that could be separate from any workflow issues, just something the software does.

As someone who recently posted a great many posts about tagging, XMP and standards, I’m all for producing work that can fit well into larger contexts. It’s just been this nagging feeling that much of the colour stuff discussed on pixls is born out of video conversations, and that video actually has a very different workflow, infrastructure and delivery from photography. Certainly it’s great to learn from the more developed peers, but are the fundamental differences underestimated? Further, which photographer workflows and delivery demands benefit from the ACES principles? The theory is sound, but will there ever be practical benefits to the conceptual compartmentalisation? Or are edits just edits in photography?

Well, from my perspective the same already applies to my outputs, although at the moment only sRGB and print variations. Why? Because after seeing wide-gamut HDR, I want this for my photos. At the same time, to make this happen, a lot of the infrastructure has to change or at least be taken into account. I want a wide-gamut HDR monitor at some point but still deliver to whatever printer colorspace there is. Ideally without the need to regrade a set of pics.
Is it an imminent problem to be solved: no, because the photo world is strangely slow in adapting to newer and ‘better’ output devices (except for when they abandoned analog film :angry: ). Will I retire my sRGB display-referred look creation tools: probably not… as there is nothing to be seen on the horizon for a potential wide-gamut HDR future.

While I can see that this is done in practice, I wouldn’t be a fan of it. It muddies the waters a bit. Then everyone starts to play the blame game over why a photograph looked subpar in a publication: was it the AD, or the photographer? But a regrade for output… if the medium can justify it, sure. I think this can also be done with an ACES workflow, you just lose some benefits. If you can justify a ‘creative ODT’, sure… I’d like to know from a real colorist, though, what they think about it. The compartmentalisation in movies gives room for people to specialize in stuff (colorist, the job!). If the AD or the print publication would credit who does what… that’d be great. Image/Color: PhotoPhysicsGuy, Retouch: nosle, Printing/finalization: HighGlossMagazine… you catch my drift. Photography is often more of a one-man-band thing.

It could certainly be influenced by video! I can only say that wide-gamut HDR video looks amazing. The last time I saw something this pretty was from a Velvia50 6x6 slide projection of a picture of a sunset. Deep rich colors and contrast which make you forget it’s a reproduction.

Possibly. I can’t rule it out. Which directly ties into your next questions:

I think a high-complexity ACES-like workflow would be attractive for working pros or the tech-interested. Delivery to print, sRGB and HDR at the same time, all derived from the same output-referred master. Also: the adoption of HDR screens will need software which can handle at least two outputs, HDR and SDR. You don’t want to switch to the HDR darktable branch and not be able to output SDR anymore because of a lack of color management in your app, your OS or, god forbid, both.

And yes, edits are just edits, an afterthought of how to make a good picture a tiny bit better. A scene-referred ACES-like workflow will not improve one’s photography skills.

With regard to tone, if you were to take a raw image, convert it to a linear TIFF, and display it on a calibrated setup, you’d likely find it too dark, dull, or both. What’s missing is the base curve, filmic curve, or whatever curve you decide to use to lift the image out of light-linear to whatever the rendition medium takes as input. The purpose of the tone curve we use in raw processing is to bridge that gap, but not go so far as to handle the rendition transform.
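As a rough numeric illustration of that gap (simplified, ignoring colour-management details):

```python
# A scene-linear 18% mid-grey written straight to an 8-bit file with no
# tone or transfer curve ends up at a very low code value:
linear_midgrey = 0.18
print(round(linear_midgrey * 255))                    # ~46 of 255 -> looks dark

# The sRGB encoding step alone already lifts it considerably ...
srgb = 1.055 * linear_midgrey ** (1 / 2.4) - 0.055    # sRGB OETF, above its linear toe
print(round(srgb * 255))                              # ~118 of 255

# ... and a base/filmic-style curve adds an S-shaped lift on top of that,
# which is the "bridge" described above.
```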

I’ve run across a few well-exposed images which didn’t require a tone curve; the display transform did the lift to perception in these cases quite nicely.

Regarding ACES, I think studying it is instructive as it specifies a different way of handling the same exact problem we have to handle in single-image photography: wrangling camera response through raw processing to rendition gamut and tone. They’ve just included a few intermediate steps to accommodate the large number of people who have to do specific things along the way…

As your favourite armchair commentator, I would say that everyone’s transform needs are different. If you are Lucasfilm, and have a staff and contracts of thousands, then you may need more than mentioned. :stuck_out_tongue: It also depends on how many stages and containers your data will go through. I have argued with someone here about the ACES container, the colour space or something to that effect. I think that is where the RRT and related transforms have relevance. But for most of us, there is no need; in fact, LMT isn’t even used in our photography. We are only starting to dabble in colour grading, compositing, etc.


It’s not just for team-size reasons though, is it? It’s also for separating creative intent from technical necessities like gamut compression and dynamic range compression. While there is some creative leeway in how the technical things are done, there is good reason to separate the technical from the creative.
This is where it’s really instructive for the case we’re discussing in this thread, and for how it relates to other modules which do something similar.
Every argument for or against log-logistic should be checked against a workflow and pipeline requirement. For that, the pipeline has to be defined and, with it, how it is intended to be used. Apart from a separation into scene-referred and display-referred, I don’t even know where to read what is supposed to be done in which part of the pipeline. And yet the pipeline is brought up as an argument for or against log-logistic.

What does the filmic module do for you? Isn’t that trying to create a certain look?


The dt filmic module, in its current form, isn’t an LMT. To me, it is a Swiss army knife. The following chart is just an example workflow. In your opinion, where would dt’s filmic be situated?

At least in the DI Grade node, as highlight and shadow contrast and desaturation are supposed to be adjustable to taste (choosing a hard or soft knee, midtone saturation, etc.). I have a hard time seeing control over midtone contrast as something that should live in an output transform, but maybe (density AND contrast control for neg-film and pos-paper response). How much of the highlights and shadows should be mapped back to display space is imho part of an output transform (Dmin and Dmax and paper latitude, I think), but should be rather independent of other decisions. Filmic does all of this because it tries to mimic the process of film-negative capture and output to a paper print.

So for me it does a lot of different things that in ACES are distributed to different points in the pipeline, so I agree that it is a Swiss army knife; it’s a one-stop shop for certain looks.

log-logistic+skew mostly makes a good-looking output transform for my taste. If shadow and highlight contrast control were somehow implemented, it would already step more into the realm of the ‘DI grading’ node imho.
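For what it’s worth, one toy way to picture such a skew (emphatically not the actual implementation, just an illustration on top of the same log-logistic curve) is to raise the normalised output to a power, which shifts contrast between shadows and highlights:

```python
import numpy as np

def log_logistic_skewed(x, contrast=1.7, midgrey=0.18, skew=1.0):
    """Toy skewed log-logistic: skew > 1 pushes the curve down and adds shadow
    contrast, skew < 1 lifts it. Purely illustrative, not darktable code."""
    x = np.maximum(x, 1e-9)
    base = x**contrast / (x**contrast + midgrey**contrast)
    return base**skew

x = np.array([0.02, 0.18, 1.0])
print(log_logistic_skewed(x, skew=1.0))  # symmetric reference, mid-grey at 0.5
print(log_logistic_skewed(x, skew=1.5))  # shadows and mids pulled down, highlights barely move
```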

@nosle, I fully understand your concern, especially with many of the new scene-referred modules being quite technical. I could also slap back with quotes like “video is just many still pictures”. But hear me out: the references to ACES in this thread are not about implementing ACES in darktable but about learning what it does. ACES is, after all, an open color pipeline definition used by a large, successful industry. It’s really, really nice to be able to dig into their source code and compare what darktable does against their test images! I have been comparing this log-logistic sigmoid against the tone curves used in ACES as a way of checking whether the results are good or not. Not more than that.

@afre, I took the liberty of fleshing out that diagram with my current knowledge. (Crossing my fingers that I haven’t made any big mistakes!) The pipelines go from left to right, and the columns are arranged so that the boxes correspond to the same operation.

  1. Don’t bother too much about the stuff inside the orange box; this is related to creating images and how you know you are doing OK. Let’s just simplify that as our RAW input image (or why not a CGI image? Same-same!)

  2. The DI grade in ACES should be quite equivalent to the scene-referred modules in darktable, i.e. modules up until filmic / (base curve).

  3. The difference from ACES begins with the output transform, also known as the Reference Rendering Transform + Output Device Transform (RRT + ODT, or RRTODT); a reduced code sketch follows after this list. The RRT transforms the scene-referred data to the range of an ideal 10000-nit reference display.

  4. This dynamic range can then either be trimmed or further compressed by another transform in the ODT such that it fits the target display device. The ODT is also responsible for format encoding, such as sRGB gamma for more efficient quantization. It should be trivial to add glossy and matt paper ODTs within the same framework.

  5. Compare that with darktable, and it’s pretty clear that the filmic module is the same as RRT plus ODT but without any format-specific gamma stuff. This is why I have inverted the sRGB gamma and HLG function in the plots over at the Tone Curve Explorer. Note that filmic is just one method for doing this “display transform”/“device transform”; log-logistic is another alternative, the base curve could easily be updated to fulfill the mathematical requirements for the future, and it would be possible to add the ACES scene-to-x-nits LUTs without any pipeline modifications to darktable. Note that paper prints also count as displays; they just reflect light instead of emitting it!

  6. Any display-referred edits should be done after this step, and they can be; those edits will only be valid for that display target, but there is nothing mathematical stopping darktable from supporting scene-referred and display-referred editing as part of the same pipeline. I will be using scene-referred edits as much as I can, but I really like color zones in display-referred space, and until there is a replacement…

  7. There is finally the possibility to edit your image in format space, i.e. directly on the sRGB values, HSV being the most common example of why this is a terrible idea. I don’t know if there are any modules at this point in the pipeline in darktable, but I hope it’s not about color editing!
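To make the filmic-versus-RRT+ODT comparison in points 3 to 7 a bit more concrete, here is a very reduced sketch of such a split. The log-logistic curve stands in for the rendering half and a plain sRGB encode for the device/format half; all of it is illustrative, none of it is the real ACES or darktable code:

```python
import numpy as np

def rrt_like(scene, contrast=1.7, midgrey=0.18):
    """Stand-in for the 'RRT' half: a log-logistic curve taking scene-linear
    data to a display-relative [0, 1] range (not the real ACES RRT)."""
    scene = np.maximum(scene, 1e-9)
    return scene**contrast / (scene**contrast + midgrey**contrast)

def odt_like_srgb(display_linear):
    """Stand-in for an 'ODT': clip to the device range and apply the sRGB
    encoding used for quantization (the 'format' step in points 5 and 7)."""
    display_linear = np.clip(display_linear, 0.0, 1.0)
    return np.where(display_linear <= 0.0031308,
                    12.92 * display_linear,
                    1.055 * display_linear ** (1 / 2.4) - 0.055)

# One scene-referred master, one encode per target; swapping odt_like_srgb for an
# HLG/PQ or printer-profile encode would follow the same pattern.
print(odt_like_srgb(rrt_like(np.array([0.01, 0.18, 1.0, 8.0]))))
```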

@rvietor and @ggbutcher Linear light! Only bother about the space we are using: linear light means double the intensity = double the value, not perceptually twice as bright. Display space can also be linear, but it is limited to a displayable range. Screen: [min brightness, max brightness]; print: [min reflectance, max reflectance].

@PhotoPhysicsGuy Looks like we are disagreeing on what the ACES equivalent of the filmic / log-logistic operation in darktable is; how interesting! I hope the infographic at least makes it clearer what the ACES plots in the tool are. They are the RRT and the ODT up to the point of “Actual x nits display”; I marked this with a range saying “Topic of this thread”.

A note about the purple box: the equivalent of this would be to generate custom picture profiles/styles for your camera based on a darktable style. Would be a super nice way to get OOC JPEGs closer to your actual workflow, improving your chimping experience!

And finally, it’s definitely time to update that PR and post a bunch of pictures, more than 100 long and theoretical posts is pretty heavy reading.


:+1: As I said, I like pretty pictures and numbers. They help clarify complex ideas, even for the authors themselves, and welcome people who aren’t familiar with them. ACES is fresh to many in this forum.


Indeed interesting! I think this is maybe due to the fact that putting filmic, log-logistic AND the good old base curve in the same ‘Display Transform’ category is not as fine-grained as the ACES chart would allow. But this is still a very helpful graphic for the discussion.

I am open to a lot of things! For example, using log-logistic as a ‘darktable RRT’ followed by the BT.2390 EETF for mapping from output reference to display, or for both, or just as a single module in the pipe. Separating tasks makes sense to me.


I mostly put it in there the way darktable is doing it at the moment and where the equivalent step is in ACES.
My gut feeling is that this transform should be kept as one module that takes scene-referred data and spits out display-referred data. It would make sense to me to make that module consistent with the darktable input transform (camera-profile scene-referred data to work-profile scene-referred data) and output transform (display-referred data to format-referred data).

That would make filmic one of possibly multiple available methods. The base curve could fit nicely as an RRT in an ACES-like workflow, with EETFs for mapping to lower-dynamic-range displays. Log-logistic would in contrast suffer from being used together with EETFs, as those would most likely mess up the nice smoothness properties of the log-logistic curve. Filmic is probably also better off defining the white and black targets directly, without EETFs. And that is fine if you ask me; both approaches should be able to live side by side. Assuming we do not want a fourth possible space for color editing, i.e. the step in between RRT and EETF! (Bad idea if you ask me. I don’t know of any reasons for doing that, and users already struggle with keeping scene- and display-referred apart.)

I think one key to a good user experience would be to define the white and black targets in the output transform and let them be defined by the output: HLG 1000 nits, SDR 100 nits, matt paper, glossy paper, etc. Paper prints could maybe be nicely handled by supporting printer ICC profiles as the output display definition! A very technical user could be given the option to define the target display with custom settings, but that would probably be a very infrequent use case.
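Just to sketch what “let the output define white and black” could mean in practice (every number and name below is a placeholder, not a proposal for actual darktable code):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class DisplayTarget:      # hypothetical "the output defines the targets" record
    name: str
    black: float          # relative luminance of the deepest black the medium shows
    white: float          # relative luminance of its white (1.0 = SDR reference white)

TARGETS = [
    DisplayTarget("SDR 100 nits",  0.00,  1.0),
    DisplayTarget("HLG 1000 nits", 0.00, 10.0),
    DisplayTarget("matt paper",    0.04,  0.85),   # made-up reflectance figures
]

def map_to_target(tone_mapped, target):
    """Scale a [0, 1] tone-mapped result into the target's black..white range.
    Purely illustrative; a real module would roll off rather than scale linearly."""
    return target.black + np.clip(tone_mapped, 0.0, 1.0) * (target.white - target.black)

values = np.array([0.0, 0.5, 1.0])
for target in TARGETS:
    print(target.name, map_to_target(values, target))
```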

Note that this is just my current feeling for how I would like it to be; it’s not like I have any authority in making decisions about this.

@afre I’m working on those pictures :sweat_smile: