I was going to refrain from further posting till after the topics were sorted out, but I ran across this, which I think is pertinent to all aspects of this thread:
(License: CC BY-NC 2.5)
@ggbutcher To me, it is more about what I hinted at in another thread: people doing work with other people but not talking about it in their reports. I have linked to at least one doc where there were blanket or vague statements like "we talked about it". That, and industry isn't willing to share; e.g., Adobe and others are involved. They would share if you gave them money, or used their patents and paid them for those. I guess that is a part of their job and business. Or they are just people who are above, or too busy for, engaging with us common folk.
actually, I thought you were going to post this…
Played around with Natron a bit and the rawtoaces utility. The biggest issue I ran into is that, since rawtoaces is a command-line tool that spits out EXR files, it is hard to check whether the inputs are actually correct/useful. It doesn't really throw any data away, though, so it is possible to recover from mistakes; it just makes the whole process quite a bit harder.
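One way to sanity-check the EXRs before pulling them into Natron is to dump channel statistics; if exposure is roughly right, a gray card in the scene should land somewhere near 0.18. A minimal sketch, assuming the python-openexr bindings and a hypothetical filename:

```python
import OpenEXR
import Imath
import numpy as np

# Open the EXR that rawtoaces wrote (filename is hypothetical).
exr = OpenEXR.InputFile("output.exr")
dw = exr.header()["dataWindow"]
print(f"size: {dw.max.x - dw.min.x + 1}x{dw.max.y - dw.min.y + 1}")

pt = Imath.PixelType(Imath.PixelType.FLOAT)
for name in ("R", "G", "B"):
    # Channels come back as raw bytes; view them as 32-bit floats.
    data = np.frombuffer(exr.channel(name, pt), dtype=np.float32)
    print(f"{name}: min={data.min():.4f} max={data.max():.4f} mean={data.mean():.4f}")
```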
Anyway, some results
Scene graph of above
The loop in the scene graph is used to isolate the eyes and give them a bit more emphasis
(Note: from the OCIO CDL node you can export the CDL, which can then be used as a look in other OCIO-compliant software, if configured correctly.)
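For reference, an exported CDL is just a small XML file; a rough sketch of what such an export can look like, assuming the usual ASC CDL .cc layout (identity values shown, id is made up):

```xml
<ColorCorrection id="shot_042">  <!-- id is hypothetical -->
  <SOPNode>
    <Slope>1.0 1.0 1.0</Slope>   <!-- per-channel gain -->
    <Offset>0.0 0.0 0.0</Offset> <!-- per-channel lift -->
    <Power>1.0 1.0 1.0</Power>   <!-- per-channel gamma -->
  </SOPNode>
  <SatNode>
    <Saturation>1.0</Saturation>
  </SatNode>
</ColorCorrection>
```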
Second example
With the scene graph
This time I isolated the sky
All needed files to reproduce: https://drive.google.com/open?id=1dHmcxXVTIPdvEjwUZjHAeG8oMiNJ6kCQ
For this, install Natron, download the ACES 1.0.3 OCIO config and configure Natron to use it; for the DNG to EXR conversion, download the rawtoaces tool.
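For the conversion step, an invocation along these lines should work (wrapped in Python here just for illustration; the flag values are assumptions based on how I remember the rawtoaces README, so double-check `rawtoaces --help` for your build, and the filename is hypothetical):

```python
import subprocess

# Convert a DNG to an ACES EXR.
subprocess.run([
    "rawtoaces",
    "--wb-method", "0",   # 0: use the white balance from the file metadata
    "--mat-method", "0",  # 0: build the IDT matrix from camera spectral sensitivities
    "IMG_0001.dng",       # hypothetical input file
], check=True)
```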
Above is licensed CC BY-NC 2.5
Someday someone has to write a tutorial on this or a similar workflow. Maybe it already exists. Just lazy or ignorant.
If I have this right, you:
Yes, that is right. Of course you don't have to stick to OCIO nodes; as long as a node keeps the scene-referred data intact it should work (so, for example, no inverting color data; alpha maybe, depending on whether it is pre-multiplied or not; masks are OK to invert).
Very interesting exercise!
@dutch_wolf @Elle @gwgill, and others, I have a very basic question concerning how to prepare images for HDR displays. In our "normal" workflow we adapt the output such that "0" is mapped to black and "1" is mapped to the display white point, with middle gray mapped to 0.18 in linear encoding. The display has a maximum brightness of 100 nits or so.
What happens when the display is capable of generating a brightness level 10x bigger? I suppose that the black level does not increase by 10x as well, otherwise it would not be HDR but simply brighter, right?
To which brightness levels should one map the "0.18" and "1" values in this case?
Sorry, I don't have the answer to that; probably best to look at the encoding specs (maybe take a look at the standard for HDR10?)
Anyway, I think this is how I want a photography workflow to look:
Undecided about the exact color spaces to use, although either ACES2065-1 or ACEScg should be workable.
Also, this hypothetical RAW editor would be usable for an ACES workflow too: just disable the user-adjustable tonemap operator and load an ACES OCIO config. That would look like this:
I faintly recall someone (@age?) briefly talking about this or the like in one of the many threads; maybe:
Rendering also has a specific meaning in color management, in relation to color appearance/viewing environment adjustment, and/or adjusting for device gamut limitation.
A (typically input referred) image is rendered to an output device space.
To 0.18 and 1.0 in scene referred space
The convention employed by OpenEXR is to determine a middle gray object, and assign it the photographic 18% gray value, or 0.18 in the floating point scheme.
Technical Introduction to OpenEXR (PDF): www.openexr.com › documentation › Te…
ACES is scaled such that a perfect reflecting diffuser under a particular illuminant produces ACES values of 1.0.
https://acescentral.com/t/supported-compression-for-aces-exr-container/391/5
@age - I looked at both of those links. I don't think either really has the correct equation for the TRC, only information for putting the equation together. Either way, the step from "here's stuff that can be used to make the equation to put into a spreadsheet" to "here's the actual equation to put into the spreadsheet" is a step I'm not prepared to take on my own. I'm going to send out a couple of emails to ask for some guidance, and in the meantime maybe someone on this list might have or can find the actual equation? It probably starts with "Y=" and probably has a value that indicates how to modify the PQ equation to incorporate the nits value. Please don't assume the previous sentence means I know what I'm talking about! Or maybe the previous sentence makes it obvious that I don't know what I'm talking about.
@dutch_wolf - thanks! for the Natron files and for the explanations of various terms. I downloaded Natron from git - which branch should be compiled? Also, I found a link on GitHub - probably already given earlier in this long thread - for the ACES OCIO configurations; is this also needed? Because of other time commitments it will be maybe a week before I can find the time to work through your examples, but I'm looking forward to seeing other people's responses to/results from working with Natron/OCIO/ACES.
As an aside, I have a very high dynamic range EXR file if anyone might find it useful - it's not very pretty! But it might be nice to see what the results are for low, "middle/extended" and very high dynamic range images, so I can post the file if anyone is interested.
Currently use the flatpak Natron package myself, which AFAICT is reasonably up to date; for the OCIO configs I just cloned https://github.com/ampas/OpenColorIO-Configs and used v1.0.3.
Do note that the Reader and Writer nodes are fully OCIO color managed but the view node isn't, so before the view node there needs to be an OCIO display node and the viewer needs to be set to linear.
(Viewer setting highlighted in red; the example is actually set up to use Filmic Blender, but the general principle stays the same)
I'm no expert but from what I've read so far, it seems that:
There's mention in the primer that ACES2065-1 can cause problems with software, leading to distortion when used as an internal colour space.
I think the normalization of the exposure for the scene referred space is very important.
For example, I've taken two shots; the one on the left is exposed for mid-gray and the other one for the highlights.
The overall brightness on the left is similar to what my eyes saw in real life.
The convention employed by OpenEXR is to determine a middle gray object, and assign it the photographic 18% gray value, or 0.18 in the floating point scheme. Other pixel values can be easily determined from there (a stop brighter is 0.36, another stop is 0.72).
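That convention makes the whole-stop values easy to tabulate; a quick sketch:

```python
middle_gray = 0.18
for stops in range(-3, 4):
    # Each stop is a doubling/halving of linear scene light.
    print(f"{stops:+d} stops: {middle_gray * 2 ** stops:.4f}")
# +1 stop -> 0.36, +2 stops -> 0.72, matching the quote above.
```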
You are right. So for the internal workspace I am thinking ACEScg, with digital intermediates in ACES2065-1[1]. This is similar to full ACES; the biggest change would be to not use the ACES ODTs, since as a photographer I want to be more flexible in my Render Transform (so not using the RRT all the time).
For communicating the LUT generated by the processors to downstream programs I would just use environment variables, as described in the OCIO documentation (see: http://opencolorio.org/userguide/looks.html#userguide-looks).
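To make that concrete, a minimal sketch of the per-shot look mechanism from that userguide page, assuming the OCIO v1 Python bindings; the config path, $SHOT value, and look name are all made up for illustration (the color space names follow the ACES 1.0.3 config):

```python
import os
import PyOpenColorIO as OCIO

# Downstream apps resolve $OCIO (and any context variables such as $SHOT,
# if the config references them) from the environment.
os.environ["OCIO"] = "/path/to/config.ocio"   # hypothetical config path
os.environ["SHOT"] = "shot_042"               # hypothetical context variable

config = OCIO.GetCurrentConfig()

# Apply a named look within the working space (the look name is hypothetical).
transform = OCIO.LookTransform()
transform.setSrc("ACES - ACEScg")
transform.setDst("ACES - ACEScg")
transform.setLooks("shot_grade")

processor = config.getProcessor(transform)
print(processor.applyRGB([0.18, 0.18, 0.18]))  # middle gray through the look
```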
[1] I keep typing ACES2065-4 for some reason, but that is the ACES EXR container, which uses ACES2065-1.
From here:
https://www.smpte.org/sites/default/files/section-files/HDR.pdf
With HDR PQ, there is no agreed upon diffuse white point level. Many are using 100-200 nits as the diffuse white point level, the old 90% reflectance point (100 IRE). Camera operator or colorist/editor must also know what reference monitor will be used for grading the content. For example, if a 1000 nit monitor is used for grading, with a diffuse white point of 100 nits, white is set at 51% for SMPTE ST 2084 (1K). If a 2000 nit monitor is used, diffuse white is set at 68%.
Which I'm assuming leaves room for the specular highlights and "brighter than diffuse white". But again, HDR displays are not something I know anything at all about. When I asked for a multiplier for the equation for the TRC for the profile for an HDR10 monitor, I'm guessing that the multiplier has to do with where diffuse white is set. But this is just a guess.
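For what it's worth, the 51% figure can be reproduced directly from the ST 2084 (PQ) curve; a minimal sketch of the inverse EOTF, using the published PQ constants (the same ones that show up in the VapourSynth expression later in the thread):

```python
def pq_encode(nits: float) -> float:
    """SMPTE ST 2084 inverse EOTF: absolute luminance in nits -> code value 0..1."""
    m1 = 0.1593017578125
    m2 = 78.84375
    c1 = 0.8359375
    c2 = 18.8515625
    c3 = 18.6875
    y = nits / 10000.0  # PQ is defined against a 10000-nit peak
    y_m1 = y ** m1
    return ((c1 + c2 * y_m1) / (1 + c3 * y_m1)) ** m2

print(f"{pq_encode(100):.3f}")   # ~0.508: diffuse white at 100 nits -> ~51% signal
print(f"{pq_encode(1000):.3f}")  # ~0.752: a 1000-nit highlight
```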
@Carmelo_DrRaw @Elle I read through lots of PDFs today. I didn't keep track of them, but here is one that seems to address much of it: https://www.itu.int/dms_pub/itu-r/opb/rep/R-REP-BT.2390-1-2016-PDF-E.pdf, which I only skimmed just now, sorry. However, the obsessive reading spree was on my small-screened and half-touch-broken phone, while I was busy doing something else. I might have gotten things mixed up because of that and the fact that there is so much info out there, but here are some points that might be relevant. Again, I am speaking in non-technical, possibly vague terms. My purpose in threads like these is to brainstorm and/or provide sanity to a very complex subject. I will leave the technical efforts and battles to the rest of you.
1. There are two standard transfer functions called Perceptual Quantizer (PQ) and Hybrid Log-Gamma (HLG). Each has its strengths and weaknesses. Briefly, PQ is an absolute display-referred signal; HLG is a relative scene-referred signal. The former needs metadata and isn't backwards compatible with SDR displays; the latter is. Depending on the rest of the specs, esp. for PQ, there is a preferable type of HDR display and surround. A common measure is the amount of nits.
Looking into this would probably answer @Carmelo_DrRaw's question. Many documents show what happens on various display and surround combinations. Pretty graphs and descriptions. Makes me want to root for one or the other as if it were a competition. (I am leaning toward HLG.)
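To make the contrast with PQ concrete, a minimal sketch of the BT.2100 HLG OETF; note there is no absolute luminance anywhere in it, which is the sense in which it is relative and scene-referred:

```python
import math

def hlg_oetf(e: float) -> float:
    """BT.2100 HLG OETF: relative scene-linear light 0..1 -> non-linear signal 0..1."""
    a = 0.17883277
    b = 1 - 4 * a                  # 0.28466892
    c = 0.5 - a * math.log(4 * a)  # 0.55991073
    if e <= 1 / 12:
        return math.sqrt(3 * e)    # square-root segment for the shadows/midtones
    return a * math.log(12 * e - b) + c  # log segment for the highlights

print(f"{hlg_oetf(1 / 12):.3f}")  # 0.500: half signal at one-twelfth of peak
print(f"{hlg_oetf(1.0):.3f}")     # 1.000: peak scene light hits full signal
```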
2. Next we have @Elle's link and comments.
Stupid GitHub now won't let me search from its front page without logging in. MS's handiwork?
These commits and their comments show how variable the standards can be. There were PDFs discussing the choices made by various entities, workflows and devices. The discussion varies depending on the perspective of the document or slideshow publisher, but you kind of get a gist of what the common themes are among the infographics, tables and figures.
@Elle's particular linked document, HDR.pdf, gives examples in the form of waveforms, which is very helpful from our perspective. Photographers tend to use the histogram (JPG); videographers use waveforms (and other scopes) to quickly gauge where the DR, among other things, is. As you look at the images, to me at least, it is easy to understand why "there is no agreed upon diffuse white point level". It has to do with a lot of things, a few of which I will briefly list in the next paragraph.
Just as we need to make decisions when we look at the camera's histogram (generally generated by the preview JPG, not the raw!), the videographer has to look at the scopes to determine and decide on the DR and the distribution of tones, among other things. Choices need to be made (edit: and we need to consider leaving some data and perceptual headroom too). Hopefully consistent ones per batch or project. These decisions are based on a number of factors including personal experience and tastes; client and product expectations; workflow; and ultimate output and viewing conditions. There is a lot to be said about point #2 but I have to rest after a tough day!
It's definitely the second link: http://www.streamingmedia.com/Downloads/NetflixP32020.pdf
Linear to st2084 10000nits
Linear to st2084 1000nits
Tested in VapourSynth (https://github.com/vapoursynth/vapoursynth), where
c=core.resize.Bicubic(clip=c, format=vs.RGBS, transfer_in_s="linear", transfer_s="st2084", nominal_luminance=1000)
is equivalent to this in reverse Polish notation
c = core.std.Expr(c, expr=" 0.8359375 x 10 / 0.1593017578125 pow 18.8515625 * + 1 18.6875 x 10 / 0.1593017578125 pow * + / 78.84375 pow ",format=vs.RGBS)
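For checking round-trips out of VapourSynth, the inverse direction (code value back to linear) is just the algebraic inversion of the same formula; a sketch with the same constants, including the nominal_luminance=1000 scaling (the `x 10 /` term in the expression above):

```python
def st2084_decode(v: float, nominal_luminance: float = 1000.0) -> float:
    """ST 2084 code value 0..1 -> linear light, where 1.0 = nominal_luminance nits."""
    m1 = 0.1593017578125
    m2 = 78.84375
    c1 = 0.8359375
    c2 = 18.8515625
    c3 = 18.6875
    p = v ** (1 / m2)
    y = (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)  # luminance / 10000
    return y * 10000.0 / nominal_luminance

# Round-trip check: 1000 nits encodes to ~0.752, which decodes back to ~1.0.
print(st2084_decode(0.752))  # ~1.0, i.e. 1000 nits at nominal_luminance=1000
```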