Does it make sense to process a .JPEG camera file under the scene-referred paradigm?

Thanks, @Entropy512.

After installing ‘imagecodecs’ and some other Python dependencies, I got my ICC profile. Curiously, Darktable shows the image much darker than with the ‘sRGB (e.g. JPG)’ input profile. Maybe this is due to some of the issues I had to work around to finish the process.

For your information I had to solve these issues:

  • In fs_gradient_capture.py, I had to comment out lines 63 and 64 to deactivate them, since they raised errors.
  • Some images generated by fs_gradient_capture.py were not recognized as JPG files and made robertson_process.py crash (other image viewers failed on them too). I had to remove those files in order to run robertson_process.py successfully.

It has been an interesting exercise, although I’m afraid I will get better visual output with DT’s ‘sRGB (e.g. JPG)’ input profile.

I think I mentioned that the gradient capture is fairly hardcoded to Sony for now, so it probably needs some work to be more flexible with other cameras. The lines in question set the camera to capture internally instead of to SD card; I’m guessing the capture target needs to be set differently for Canon cameras. Something to try is to comment out lines 28-30 and replace int(height/4) with height in lines 25-27 - if you look at the issues list, I suspect the solid-colored bands may cause gamut clipping that leads to errors in the response curve. On further thought, for pure curve extraction a solid white gradient is best.
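In case it helps, a full-height solid white (neutral) gradient like the one suggested above can be generated in a few lines of NumPy. This is only a sketch; the frame size and the output filename are placeholders, not values taken from fs_gradient_capture.py:

```python
import numpy as np

def white_gradient(width=1920, height=1080):
    """Neutral (R=G=B) horizontal ramp from black to white that spans
    the full frame height, so no solid-colored bands appear."""
    ramp = np.linspace(0, 255, width)                      # 0..255 across the frame
    plane = np.repeat(ramp[np.newaxis, :], height, axis=0)  # same ramp on every row
    return np.stack([plane, plane, plane], axis=-1).astype(np.uint8)

img = white_gradient()
# Write with any image library, e.g. imageio.imwrite("gradient.png", img)
```

Displaying this full-screen and photographing it with the bracket sequence should exercise the whole code-value range without saturated color patches.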

Also, I did mention that I’ve found that LuminanceHDR does a MUCH better job than my own script at extracting a sane response curve. If you can provide the .npy file you got from the calibration I can look and see if it might have been problematic. You’ll notice that my ICC converter attempts to “clean up” the response curve, but yours may have issues not present in the curves I’ve seen so far.

“It looks much darker” sounds to me like, in addition to 255 mapping to an unusually high luminance, some lower values may also map too high. You’ll notice that I’ve got workarounds for 255 being bogus, but if lower values have errors, that will throw things off. Again, LuminanceHDR’s response recovery seems to do MUCH better here.

While I’ve previously believed that pre-compressed JPEGs with applied tone curves offered little to work with under a scene-referred workflow with the filmic rgb module, does the info above, for a simple mind, rather mean:

  • The discarded part of the raw exposure range is obviously lost anyway, but linearity of the (remaining) data is re-established (albeit not in exactly the same way as in the raw data) - and that linearity is the crucial element for image data to be treated correctly by the scene-referred workflow, albeit with the pixel pipe reconfigured by the v3.0 JPEG setting?
  • Under this JPEG configuration we should, in the same manner as for raws, set exposure by anchoring middle grey and disregard any “overshooting” data at the ends (if there are still any such data from the outset, where white was 1 - but can this change with the linearization?), and rather take care of highlights/shadows with the filmic rgb/sigmoid module? (I thought one rather needed to apply the tone curve module.)

If filmic/sigmoid is the recommended way to go, is there any reason to believe that one of them should generally give a better result than the other?

Is there then any other general advice on how JPEG files ought to be edited differently from raws? (I’ve noted the potential banding troubles etc. that afre has pointed out one should be prepared to encounter with a JPEG file rather than with raws.)

Sounds plausible, but the only way I’d trust to do such reverse engineering would be if the JPEG contained an embedded ICC profile that described both the gamut AND tone transforms applied to make the JPEG image. The ICC TRC would then be the authoritative source for the inverse curve required to skew back to linear.

You can’t just assume the JPEG is encoded as sRGB. And, sRGB has different TRC interpretations. See:

https://en.wikipedia.org/wiki/SRGB

Scroll down to “Transfer Function”
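To make the point concrete, the sRGB transfer function from IEC 61966-2-1 is piecewise: a linear segment near black plus a power segment with exponent 2.4 - not the plain “gamma 2.2” it is often summarized as. A minimal sketch:

```python
def srgb_to_linear(c):
    """Decode an sRGB code value in [0, 1] to linear light (IEC 61966-2-1)."""
    if c <= 0.04045:
        return c / 12.92                      # linear toe near black
    return ((c + 0.055) / 1.055) ** 2.4       # power segment

def linear_to_srgb(l):
    """Encode linear light in [0, 1] back to an sRGB code value."""
    if l <= 0.0031308:
        return 12.92 * l
    return 1.055 * l ** (1 / 2.4) - 0.055
```

The pure power-law 2.2 approximation and the piecewise curve diverge most visibly in the shadows, which is exactly where picking the wrong interpretation hurts.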


This is a very untechnical approach I’m afraid, but I’ve had success by initially setting up sigmoid as close as possible to “straight through”, then doing my editing in the usual way. Not always, though.


My experience is that this is not something you can ever trust - the ICC profile is designed so the image displays properly, including the “baked in” tone curve. This came up recently in that I think Nikon DOES embed an ICC profile - but it is clear that the ICC profile is intended for displaying the image, not for properly postprocessing it further - e.g. the ICC profile does NOT describe the baked-in tone curve used for tonemapping/dynamic range compression.

In this case you just have to assume ALL of your metadata is wrong and reverse engineer it yourself.

That’s using:

  1. Robertson’s or Debevec’s algorithms to determine the actual tone curve used, followed by
  2. Running DCamProf on a shot of a color chart that has been linearized using the results from step 1, to reverse engineer the gamut.
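The linearization in step 2 is just a table lookup once step 1 has produced a response curve. A sketch, assuming the recovered curve is available as a 256-entry array (the function name and the curve format are mine, not from any of the scripts mentioned):

```python
import numpy as np

def linearize(img_u8, response):
    """Map 8-bit code values through a recovered camera response curve
    (a 256-entry LUT, e.g. from Robertson's or Debevec's method) so the
    chart patches are in relative linear light before gamut fitting."""
    lut = np.asarray(response, dtype=np.float64)
    assert lut.shape == (256,)
    return lut[img_u8]   # NumPy fancy indexing applies the LUT per pixel
```

The linearized chart values can then be fed to the profiling step in place of the camera's nonlinear output.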

Ah, you’re right, the ICC profile assumes the input is linear, when due to the filmic/sigmoid/whatever tone curve it is not. Thing is, it’s the only metadata that describes such (except for the “sRGB/Adobe/whatever” tags)… hmmm.

Okay, based on that I really don’t think it makes sense to process a JPEG with the scene-referred workflow. If one is forced to deal with a JPEG as input, make it up as you go, but IMHO the only sane approach is to start with raw to make ANY rendition…

Sometimes you have no real choice, but yeah, if you CAN get a raw, that is the way to go.

I started down the path of characterizing my camera’s picture profiles not for the purposes of postprocessing JPEGs (but it CAN be used to do that!), but for the purposes of having more options as far as postprocessing video. Being able to linearize the “undocumented” picture profile tone curves is useful given that some of the “documented” tone curves have various flaws and/or force you to choose an undesirable gamut. Shooting raw is $$$$$$$$ for video still.

Overall now that I have a camera that does 10-bit video, there’s less of a need to bother with reverse engineering as more than just a curiosity though, since all of the reasons to avoid S-Log3 and S-Gamut are specific to using it with 8 bits/pixel depth.


But isn’t that to be expected? The embedded profile is supposed to describe how the colours are encoded, i.e. what colour should be displayed for a given (r,g,b) triplet. And this should be completely independent of any processing done to the image…

It also means the metadata is not wrong, it just doesn’t have the data you want it to have.


Yes, the ICC display profile was intended for transforming the in-situ image to the display gamut and tone, not for reverse-engineering the image to scene-linear. That’s where I’m hung up in the intent of the thread; one can’t slew the JPEG display rendition back to scene-linear with the metadata provided.

A recent IPOL paper springs to mind. I’ve tested parts of it, but not the whole thing yet.

One can’t get the original scene values back, indeed.
But is that important for editing? I’d expect it to be sufficient to get to a state where the signal is proportional to the represented light energy and unbound, you don’t really need everything in the “raw” state.
(Similar to further editing of an image to which a tone curve has been applied: you’re still editing in a scene-referred space, where scene-referred stands for “unbound and linear in energy”)

If we could make the signal proportional etc that would be sufficient. But, sadly, we can’t do that from a JPEG image. It may have been subject to any prettifying processing.

However, what we can do is pretend that converting the sRGB image to linear RGB is sufficient. We can pretend the signal is proportional etc. Within the limits of that pretense (and the 8-bits and lossy compression), that may be good enough.

At least for the purposes of recovering scene-linear, you really only need the first two steps (invert tonemap, invert gamma) - and in fact it usually winds up easier to just treat those two as one step, as there are well-known algorithms for determining the combination of the two.

Or, spend some time characterizing the camera. If the camera is doing any local tonemapping, that can be detected as a data quality issue when trying to recover the global tonemapping via Debevec/Robertson. (If the camera DID do any local tonemapping, or did a hue-preserving global tonemapper, you’re hosed - but the vast majority of cameras on the market do neither of these by default. At least not until you go to mobile phones - then automatic local tonemapping is standard.)


I guess you mean given a comparison pair for that? Or is it really a “blind” method?

Edit: or a particular input image put through the transform

You need a bracketed sequence of images, which can be run through Debevec’s or Robertson’s algorithm. Robertson’s is more computationally intensive, but seems to give a cleaner solution. OpenCV’s implementation of both is a little meh; LuminanceHDR’s implementation actually works quite well, as I found after experimenting with using it to determine the response function. See Doest it make sense to process a .JPEG camera file under scene-referred paradigm? - #20 by Entropy512 earlier in this thread.
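For anyone curious what these solvers actually do, here is a deliberately small Robertson-style iteration on a single channel. This is my own sketch, not OpenCV’s or LuminanceHDR’s implementation, and it omits the weighting and smoothness handling that real implementations add:

```python
import numpy as np

def robertson_response(images, times, iters=30):
    """Estimate a 256-entry response curve g (code value -> relative linear
    light) from a bracketed sequence by alternating between per-pixel
    irradiance estimates and per-level curve updates (Robertson-style)."""
    z = np.stack([im.ravel() for im in images])   # (J, N) 8-bit code values
    t = np.asarray(times, dtype=float)[:, None]   # (J, 1) exposure times
    w = np.minimum(z, 255 - z).astype(float)      # hat weights, 0 at clipped extremes
    g = np.linspace(0.0, 2.0, 256)                # initial guess: linear response
    for _ in range(iters):
        # irradiance estimate per pixel, given the current curve
        E = (w * g[z] * t).sum(0) / np.maximum((w * t * t).sum(0), 1e-12)
        Et = E[np.newaxis, :] * t                 # (J, N) predicted exposures
        for m in range(256):                      # curve update, level by level
            hits = z == m
            if hits.any():
                g[m] = Et[hits].mean()
        g /= g[128]                               # fix the arbitrary overall scale
    return g
```

On clean synthetic data this recovers a power-law response well; on real JPEGs the clipped ends and local processing are what make the production implementations earn their keep.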


OOC JPEGs are often sharpened, which increases local contrast. This will mess up any estimation of global tonemapping. It might be minor, but I often see JPEG sharpening that causes halos. I notice that the IPOL paper by Dewil doesn’t mention this.


I am not sure what you mean by the second bullet point. With default presets, DT 4.2 just activates orientation for me for JPEGs; everything else is inactive. With the display-referred workflow, the display-referred set of modules shows up, including base curve, color zones, shadows and highlights, etc. It is recommended that you work on the image using these, with the v3.0 JPEG module order.

While you can use scene-referred modules like exposure, it does not make a whole lot of sense (except maybe for artistic purposes, but then anything goes). For minor corrections the effect can be innocuous (hence the manual used to recommend only tiny changes), but you can easily get nonsensical results. With JPEGs, the best you can expect is to get away with minor corrections without disturbingly large artifacts (minor artifacts that are only visible when pixel-peeping are to be expected, and are usually harmless).
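A quick numerical illustration of why large exposure moves on 8-bit JPEG data fall apart. This sketch uses the standard sRGB curve as a stand-in decode (which, as discussed earlier in the thread, is itself only a guess for any given JPEG):

```python
import numpy as np

def push_exposure_srgb(codes_u8, ev):
    """Decode 8-bit sRGB codes to linear, apply an exposure gain in EV,
    and re-encode; returns the resulting 8-bit codes."""
    c = codes_u8 / 255.0
    lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    lin = np.clip(lin * 2.0 ** ev, 0.0, 1.0)   # exposure gain, hard clip at white
    out = np.where(lin <= 0.0031308, 12.92 * lin, 1.055 * lin ** (1 / 2.4) - 0.055)
    return np.round(out * 255.0).astype(np.uint8)

ramp = np.arange(256, dtype=np.uint8)
out = push_exposure_srgb(ramp, 1.5)            # a +1.5 EV push
# A large chunk of the upper range clips to 255, and the surviving values
# are spread apart, leaving gaps (posterization) among the output levels.
```

The same operation on raw data has headroom above white and far more than 256 levels to stretch, which is why ±1.5 EV is routine there and risky here.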

A lot of photo processing software packages, especially before the 2000s, invested a lot into processing JPEGs, but that is not a game that can be won; local modifications that would amount to ±1.5 EV changes in scene-referred terms inevitably give you artifacts, especially with strong hues.

From your description, it is unclear what you want to do with the photos. If you could be more specific, it would be easier to help.