Processing RAWs for HDR displays in Darktable

Thanks for documenting and sharing…

2 Likes

Content delivery is the big challenge - especially for stills.

I have found that the only widely compatible way to deliver HDR stills to an HDR display is to use ffmpeg’s loop function along with the zscale filter to take 16-bit linear Rec.2020 TIFFs and encode them to 10-bit HLG H.265. H.265 video is the only widely compatible file format for HDR delivery.
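Roughly, the invocation looks like this (a sketch, not my exact command line - filenames, duration, and bitrate are placeholders, and the flags may need tuning for your build; zscale requires ffmpeg compiled with libzimg):

```shell
# Loop a 16-bit linear Rec.2020 TIFF into a short clip, convert
# linear -> HLG with zscale, and encode 10-bit H.265 with the HLG
# (ARIB STD-B67) transfer characteristics signalled via x265.
ffmpeg -loop 1 -t 5 -i input_linear_rec2020.tif \
  -vf "zscale=transferin=linear:transfer=arib-std-b67:primariesin=2020:primaries=2020:matrix=2020_ncl,format=yuv420p10le" \
  -c:v libx265 -pix_fmt yuv420p10le -b:v 10M \
  -x265-params "colorprim=bt2020:transfer=arib-std-b67:colormatrix=bt2020nc:atc-sei=18:pic-struct=0" \
  output_hlg.mp4
```

The `atc-sei=18` part is what flags the stream as HLG (18 is the ARIB STD-B67 transfer characteristics code); more on that below.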

Also of note: in my experience, 90%+ of HDR content out there (Netflix, Disney+, etc.) appears to have still been tone-mapped to some degree with an S-curve, just a less aggressive one than is usually used to compress dynamic range onto an SDR display. Only “HDR demo” style content has no pre-compression. This is probably done to improve compatibility, since HDR display capabilities vary enormously (number of dimming zones, average brightness above which display power limits kick in, gamut, etc.)

1 Like

Indeed, I remember hearing that even for video content, HDR was still a mess due to the many competing standards and the large number of displays on the market, all of which have different characteristics. And this is despite the movie industry being worth billions of dollars.

Thanks a lot for your suggestion to use H.265! Although this sounds a bit hacky, it could still be a nice workaround for uploading pictures online (this should work particularly well for websites which auto-loop short videos) or showing them using a phone/tablet which does not support EXR (and has finite storage…). I will give this method a try!

Yes, in general it probably makes sense to apply a light tone mapping in order to retain some control over the final look of the image. If I were sharing HDR photos, I would probably target some fixed headroom level that should work for most HDR displays in the wild under average lighting conditions (e.g. a headroom of 5, assuming a reasonable peak luminance of 500 cd/m² and an SDR luminance of 100 cd/m²). On the other hand, if I knew the device and lighting conditions beforehand, I would probably think in terms of absolute luminance and allow for more headroom. But all of this assumes that I eventually get the RGB curve (or filmic) module to work with extended luminance values… Until then, I’ll have to stick to the HDR demo style.
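Spelling out that headroom arithmetic:

```latex
\text{headroom} = \frac{L_\text{peak}}{L_\text{SDR}}
               = \frac{500~\mathrm{cd/m^2}}{100~\mathrm{cd/m^2}} = 5,
\qquad
\log_2 5 \approx 2.32~\text{EV above SDR white}
```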

One “hack” I’ve sometimes used is to map whatever the peak luminance of the transfer function is to a linear value of 1.0, which makes it play nicer with “non-HDR-aware” pipelines.

For example, dividing everything in the HLG standard by 12 (I believe 12.0 is the nominal peak of the HLG signal) - which makes it behave just like how most cameras handle these modes anyway: peak luminance maps to the sensor clip point.
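For reference, the ARIB STD-B67 OETF is defined on a scene-linear signal $E \in [0, 12]$, which is where the factor of 12 comes from - reference white $E = 1$ maps to $E' = 0.5$ and the peak $E = 12$ maps to $E' = 1$, so dividing by 12 renormalizes the peak to 1.0 as described:

```latex
E' =
\begin{cases}
r\,\sqrt{E}, & 0 \le E \le 1\\[2pt]
a\,\ln(E - b) + c, & 1 < E \le 12
\end{cases}
\qquad
\begin{aligned}
r &= 0.5\\
a &\approx 0.17883277\\
b &\approx 0.28466892\\
c &\approx 0.55991073
\end{aligned}
```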

The negative here is that all previews on an SDR display will be severely underexposed when doing this hack. But the same thing happens if you shoot RAW+JPEG on a camera that is doing this! (For example, Sony S-Log3 shot at ISO 800 will be underexposed by three stops if you calculated exposure using the camera’s displayed ISO rating, because the ISO rating is derived from the JPEG behavior, and ISO 800 S-Log3 uses the same raw sensor gain configuration as “no picture profile” at ISO 100.)
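The three stops follow directly from the ISO ratio:

```latex
\log_2\!\frac{\mathrm{ISO}~800}{\mathrm{ISO}~100} = \log_2 8 = 3~\text{stops}
```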

I’ll try to dig up some links tomorrow to:
My ffmpeg commandline
A discussion of alternative transfer functions that was focused on Panasonic V-Log, but which applies to pretty much any alternative transfer function as implemented by a camera that can also shoot RAW. (The short summary: when an “alternative” transfer function is in play, the camera’s ISO rating gets shifted, but nothing else changes from the perspective of the raw sensor data.)

As far as the multiple HDR standards go: HDR10 is the most widely supported, but HLG is extremely widely supported too - most recent (<4 years old) HDR displays will support both HLG and HDR10. Avoid Dolby Vision; it’s a pile of proprietary crap. Samsung-pioneered HDR10+ differs from HDR10 only by allowing frame luminance metadata to change dynamically during playback, which doesn’t really affect you if you just loop a single image into a short video clip. HLG is the easiest to work with, since the only metadata you need is the Alternative Transfer Characteristics (ATC) SEI flag - to the point where some people claim “no metadata is needed”. They’re lying - the ATC SEI is metadata, just fairly simple metadata that says “this content is HLG!”

1 Like

Have you tried enabling any of the available EXR compression schemes, e.g. ZIP or PIZ, for a ~2-2.5x saving? (Might want to stick w/ ZIP as a widely used standard; as OpenEXR is updated across platforms it should start using libdeflate, which is much faster but still compatible w/ older ZIP-encoded files.)

This also reminds me that we probably want to enable exporting as 16-bit float, which should be sufficient to support HDR images for delivery. Another 2x saving there.

BTW, there is also nothing wrong w/ float TIFF - it is just a container format and can store the same 32-bit float pixel values as EXR. As a plus, libdeflate is already available there via recent libtiff (on most platforms?), and you can also turn on horizontal difference predictor to squeeze out some extra compression. And it also supports ICC profiles unlike EXR. Unfortunately we’re also missing 16-bit float TIFF export in dt currently…
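(Not something darktable does today, but as an illustration of the savings: OpenImageIO’s oiiotool can do the half-float + ZIP conversion as a post-export step - a sketch, filenames are placeholders:)

```shell
# Convert a 32-bit float TIFF export to a half-float (16-bit) EXR with
# ZIP compression - combining the ~2x half-float saving with the
# ~2-2.5x ZIP saving discussed above.
oiiotool input_float.tif -d half --compression zip -o output.exr
```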

1 Like

I tried something similar here, just targeting mid-gray. It looks good on SDR display, but I guess still sacrifices quite a bit of highlights since there is clipping above 1.0, and only 1.36 EV was “recovered” (i.e., the peak doesn’t quite get up to 12…).

1 Like

Yup - my ffmpeg command line is there.

Ignore the pedantry that causes people to complain that a command which actually works should not. It does.

One thing that may help in managing how HLG looks on an SDR display is to think of it as similar to a camera’s response curve:

That’s from an ongoing effort to reverse engineer all of the picture profile behaviors of my A7III so that any of them could be linearized. This plots the effective linear S-curve that would be needed to generate the measured (or for HLG, calculated) response if a standard piecewise sRGB transfer function were applied after it. (e.g. RawTherapee’s tone curve, or darktable basecurve as the last operation before conversion/export to sRGB). Note that I suspect there’s a bug somewhere in what I’m doing, as RT’s AMTC gives a very different curve than what I’ve measured for Sony’s “no profile” aka “default stills” mode. I need to go back through everything again.

You’ll notice that the bottom of the “S” for HLG is missing there, which (in addition to the inherent desaturation of displaying Rec.2020-gamut content on a Rec.709-gamut display without proper conversion) is why HLG in “backwards compatibility” mode usually looks desaturated and washed out.

It’s also something I still don’t understand about HLG - I can see how the transfer curve should be somewhat backwards compatible, but there’s basically no discussion of the gamut issue - and the gamut mismatch in “fallback” mode leads to it being really fugly.

Which seems to be why all actually deployed content delivery systems (streaming services such as YouTube) don’t rely on the supposed backwards compatibility of HLG, and instead tonemap it themselves. (With YouTube giving you the option of providing your own LUT for the task.)

1 Like

One “hack” I’ve sometimes used is to map whatever the peak luminance of the transfer function is to a linear value of 1.0, which makes it play nicer with “non-HDR-aware” pipelines.

Yep, I actually tried this hack some time ago by exporting to linear 16-bit TIFF with the highest luminance mapped to 1.0. It looks good, but the issue is that it is not recognized by the viewer as HDR content, therefore I have to manually increase the brightness when viewing the files, and I cannot access luminance levels above the max SDR luminance (which is capped at 500 cd/m² on my laptop).

HLG is the easiest to work with since the only metadata you need is the Alternate Transfer Curve (ATC) SEI flag

Indeed HLG seems quite easy to work with! I’ll have a look at the thread you mentioned, and try to use ffmpeg to create some H.265 file and view it on a couple of devices. Do you directly export to HLG in Darktable, or do you export in linear and then let ffmpeg take care of the conversion?

I tried all the available compression schemes, but IIRC only the lossy PXR24 compression led to some gain (it reduced the file size by about half).

This also reminds me that we probably also want to enable exporting as 16-bit float, which should be sufficient to support HDR images for delivery.

That would be nice! Thanks for the link btw.

BTW, there is also nothing wrong w/ float TIFF

My only issue with it is that I could not find any viewer which triggers the HDR mode when opening a TIFF file (although the abovementioned hack works). The maximum compression level saves about 25%.

I exported to 16-bit linear and used the zscale filter (see example in that thread) to do the transfer function conversion.

Note that as of 1-2 years ago, only software encoding with x265 could set the ATC SEI flag required to have a TV trigger HLG mode properly. I don’t know if that’s supported for NVENC or VAAPI yet; I had a patch that hacked it in for VAAPI. Distro-included versions of ffmpeg will likely lack the zscale filter needed for this.

Exporting HLG from darktable and then transcoding might also work. Honestly the next time I do it I’m probably going to do a Rec.2020 TIFF and just throw it into Resolve so I can preview it in realtime - Blackmagic’s Decklink/Ultrastudio product lines are one of the only ways to do 10-bit HDMI or SDI output in Linux at the moment.

1 Like

So, I managed to produce a video file encoded in H.265. It is indeed quite space-efficient (15-20× smaller than the TIFF file at the same resolution).

Opening it in VLC, the colors look good, however it is not rendered as HDR and it clearly lacks contrast. It seems that VLC is tone-mapping it by simply reducing its contrast until it fits the SDR range.

Using IINA, the contrast is good, but the video is dim. It seems to uniformly scale down the luminance values, so this basically looks like the TIFF.

I also tried the proprietary viewer Infuse 7, but the result looks just as bad as VLC.

And of course, QuickTime being QuickTime, it simply won’t play H.265…
Actually, QuickTime can play the file if you pass the option -tag:v hvc1 during encoding. It looks like VLC and Infuse 7, but brighter (still not above the max SDR luminance).

I might try to play a bit with the ffmpeg command line options to see if I can improve the situation, but I don’t have much hope.

So it seems we have traded the problem of displaying HDR stills for that of finding a video player which can successfully play H.265 in HDR :sweat_smile: Maybe other codecs would work better.

I am now starting to think that pushing for JPEG XL support might not be the least reasonable option.

Don’t want to sound pessimistic, but since we’re not there yet with the HEIF/AVIF ecosystem after several years and investment by big players, what makes you think JXL uptake and (good) software implementation in both content-creation apps and viewers are going to be quicker/better?

I have a few gripes, at least when it comes to JXL support in darktable:

  1. Library API is not feature complete yet; I don’t think we’ll keep developing this until that happens (but it should be soon from the looks of it)
  2. The encoding options seem overly complex and incomprehensible to me, so there is a real risk of not getting it right (any help deciphering and testing those is welcome; the overall complexity of the codec makes me think it’ll take longer for its uptake, not quicker than, say, AVIF)
  3. There seems to be quite a bit of rendering/tone mapping responsibility assumed by the codec, which, to me, seems a bit strange (a storage format IMHO should get out of the way and leave rendering to apps); hope it can be disabled/skipped…
2 Likes

I’ve kind of hinted at this, but to be clear:

Anything that is not a dedicated video playback device (Smart TV, Chromecast, Android TV, etc) is likely to be problematic. However, the majority of HDR displays in existence are Smart TVs with H.265 support. (Some lack HLG support; HDR10 is the lowest common denominator that is supported by nearly everything, but it requires a lot more metadata in the file about frame average luminance, etc.)

In my case:
USB to Vizio P65-F1
Google Chromecast using LocalCast on Android or mkchromecast on Linux
Built-in YouTube app on the TV or CCGTV
(Caveat: While the Google Cast API does HDR, and YouTube does HDR, you get 1080p SDR if you attempt to combine them…)
(Caveat 2: YouTube states nothing about uploading CRF/CQ video being a bad thing, but CRF/CQ video seems to randomly cause YouTube to refuse to deliver an HDR version of a video - it’ll recognize it as HDR, it’ll tonemap it down, but the HDR version will never become available. VBR seems to be necessary.)

H.265 + PQ is the most compatible/widely supported format, but it still has a massive pile of caveats. H.265 + HLG is probably the second most compatible/widely supported - but still, lots of caveats. It took Google over a year to fix HLG support in CCGTV - it would tell the display it was HDR10 while sending HLG data, which behaved similarly poorly to a widely known bug where the device occasionally got stuck sending an HDR10 flag after dropping back into SDR mode.
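For reference, the extra metadata HDR10 requires looks roughly like this as x265 parameters (a sketch - the mastering-display and content-light-level values below are typical placeholders for a 1000-nit P3-D65-in-Rec.2020 master, not measurements from real content):

```shell
# HDR10 variant of the encode: PQ (SMPTE ST 2084) transfer plus the
# static mastering-display and max-cll/max-fall metadata that HLG
# does not need. Filenames and metadata values are placeholders.
ffmpeg -loop 1 -t 5 -i input_linear_rec2020.tif \
  -vf "zscale=transferin=linear:transfer=smpte2084:primariesin=2020:primaries=2020:matrix=2020_ncl:npl=1000,format=yuv420p10le" \
  -c:v libx265 -pix_fmt yuv420p10le \
  -x265-params "colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:hdr10-opt=1:master-display=G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1):max-cll=1000,400" \
  output_hdr10.mp4
```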

1 Like

Even DWA compression? DWA is lossy but with a user settable quality factor. I don’t know how it is in terms of compatibility…but that should give you good compression deliverables.

1 Like

DWA compression does not seem to be exposed in DT (3.8.0)

(screenshot: darktable 3.8.0 EXR export options, with no DWA entry in the compression list)

1 Like

I am not very familiar with the technical details, so I will trust you on that. Hopefully the API issues will be resolved once the reference implementation is officially published.

As far as uptake is concerned, it seemed to me that there was strong interest from some big players like Adobe and Facebook. In particular, the lossless JPEG transcoding (and the associated reduction in bandwidth) could be a big driver for adoption by social media platforms, especially since there is no need to wait until all browsers support JXL for the benefits to start to appear (e.g. serve the JXL file if the browser supports it, otherwise fall back to JPEG). Of course this does not guarantee that the HDR functionality will ever be exposed…

I’d be happy with AVIF, too, if it sees broader adoption :slight_smile:

Regarding HEIF, my understanding is that uptake was hindered by potential patent issues. Please correct me if I am wrong.

I’ll have a look at the issue you linked to. Not sure I can help much on the Darktable side. However, if I manage to find or produce a “reference” JXL file of an HDR image, I might play a bit with the JXLook viewer to see if I can get it to render correctly (the code did not seem to be overly complicated).

Oh, I see. I have to admit that I don’t own a TV, and that almost all of the non-still pictures I watch come from YouTube :sweat_smile: So I did not realize the support was that bad on non-dedicated devices.

I don’t know what you mean here — Windows 10 and OS X have supported HEIF for years now, also Ubuntu 20.04 (LTS), and Android since 8. This means that if you updated your device within the last 2 years (and you should do that more often, for security), it supports HEIF.

I expect that something similar will happen with JXL once it matures a bit. Early adoption will mean that you face a few hurdles, but that’s all; sooner or later everyone will be able to read it.

Will camera manufacturers get their act together and adopt one of these formats to replace JPEG? That’s hard to say, and may not be relevant: people who edit their images in any nontrivial way (beyond cropping and “artistic” filters for Instagram) may already be using RAW, and JPEG is certainly good enough for social media etc.

See the links to other forum posts - there are still plenty of apps that render HEIF/AVIF content in the wrong way. We wouldn’t be talking about this if it was as well implemented/hashed out as JPEG (which to be fair, is less complex in terms of color management features)…

1 Like

Will add this w/ 16-bit float support as well.

1 Like

Please be more specific. There are always bugs, but without details it is impossible to know if they are still relevant or have been fixed.