Processing RAWs for HDR displays in Darktable

Lately, I have been experimenting with “display HDR” photography¹, i.e. producing files which, when viewed with a compatible viewer on a compatible display, yield an image with potentially much higher contrast (and a larger gamut volume) than is achievable with an SDR display or an SDR format like JPEG.

¹Yes, we need a better (and more easily searchable) name for this, one that distinguishes it from what is usually referred to as “HDR photography”, i.e. HDR tone mapping. I await your suggestions!

As we will see, this can lead to images that look much more natural and pleasing to the eye than the “SDR” images we are used to. This is not very surprising, since the human eye can perceive a much wider range of luminances and colors than an SDR display can reproduce.

I got interested in this topic after viewing some HDR videos on the mini-LED iPad Pro last year. The device left quite an impression on me, and since then I have been wondering how I could leverage this technology for displaying photos. I do not print much, therefore a monitor (or a projector) is a perfectly reasonable output medium for me, and I don’t need to limit my edits to the limited dynamic range of paper.

A few months ago, I came across this post by @paulmiller on PlayRaw, where he showed that the built-in Preview app in macOS was able to trigger the HDR mode when opening .exr or .hdr files. Since I was getting a Mac to replace my old, dying laptop, I decided to give it a try.²

²This should probably also work with any recent HDR-capable laptop and a compatible viewer. An open-source viewer for EXR files that supports 10-bit HDR is tev. I found it a bit glitchy, but it otherwise worked.

Requirements:

In order to produce and view an HDR image, we need the following things:

  1. An input file capable of representing a scene with a high dynamic range. I am using images captured with a Nikon Z6 (which has a dynamic range of about 14 EVs) and exposed “to the right” (or close to it) in order to preserve the highlights. This should already be plenty of dynamic range. For more, one can create a .dng from multiple bracketed exposures.

  2. A RAW processing software that can handle HDR data. The scene-referred workflow in Darktable works fine; however, I could not get filmic to work, so I had to disable that module. (This could just be my fault, since I am not very familiar with it.) Importantly, the processing pipeline must not truncate RGB or luminance values above 1.0.

  3. A file format that can store HDR data. Depending on the specific format and color space, this can involve RGB values above 1.0, an extra shared exponent E, or a different electro-optical transfer function (like the PQ curve). The experts here can probably explain this much better and more accurately than I can. Here I will be using the EXR format (which stores RGB values as 32-bit floats) and the linear Rec2020 RGB color space.

  4. An image viewer that can leverage the full dynamic range of the display in order to display the HDR image. I mainly use macOS Preview, but tev also works.

  5. A display which can show HDR content. I will be using the built-in display of the 2021 16’’ MacBook Pro, which has a maximum luminance of 1600 cd/m² (= nits). The SDR luminance (corresponding to (r,g,b) = (1.0,1.0,1.0)) can be set to anything from 48 cd/m² to 500 cd/m² depending on the viewing conditions and how much headroom we want (e.g. for 100 cd/m², the headroom is 1600/100 = 16, i.e. the display can show highlights up to 16× brighter than the “diffuse white”). Note that you cannot obtain fully saturated colors at the highest luminance levels. See this post on AVSForum about color volume for more details.
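To put the headroom numbers in photographic terms: the headroom converts to stops as log₂(peak luminance / SDR white luminance). A quick back-of-the-envelope check, using bc as a calculator with the luminances quoted above:

```bash
# Headroom in stops (EV) = log2(peak luminance / SDR white luminance).
# For a 1600 cd/m² peak and a 100 cd/m² SDR white:
echo 'l(1600/100)/l(2)' | bc -l   # -> 4.00, i.e. a 16x ratio = 4 EV of headroom
```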

How to process a RAW file for HDR in Darktable:

Here is a step-by-step guide for producing an HDR .exr file in Darktable, starting from a RAW file. I have not yet tested how the more “creative” modules work with HDR, so for now I will aim for a very natural look. As you will see, this is surprisingly easy.

For reference, I am using Darktable 3.6.1 for macOS (x86), installed using Homebrew.

  • The first step is to disable any module that performs dynamic range compression, such as filmic (in the scene-referred workflow) or the base curve (in the display-referred workflow).
    Here is a good starting point:
    image

  • It is best to use an image which is exposed to the right (ETTR), i.e. its histogram should reach “just before” the right side of the plot.


    • This is quite good!

    • This is still okay: some highlights are clipped, but these are specular highlights, and it is not always possible to preserve them with a single exposure. The bulk of the image is not clipped.

    • This is fine too, but suboptimal: you are wasting some of the precious EVs you paid for when buying your camera.

    • This is bad. Highlights are clipped and cannot be recovered. “Flat” highlights with no details won’t look good in HDR.
  • Then, apply all corrective modules (lens correction, white balance, highlight reconstruction, astrophoto denoise, etc…).

  • Now comes the tricky part: adjusting the exposure. This is tricky for a number of reasons:

    1. First, Darktable 3.6 currently cannot display pixels with luminance > 1.0 in the image preview (the situation is even worse on macOS, where the preview is limited to sRGB due to a longstanding “impedance mismatch” between the Cairo and macOS APIs). This means that we are basically blind when working on the image. A workaround is to export often and use an external viewer for assessing the result.
    2. There is a more fundamental issue: even a peak display luminance of 1600 cd/m² is not enough to reproduce the absolute luminance of the captured scene, which can easily reach tens of thousands of cd/m² (see e.g. this Wikipedia article). Not that reproducing it would be desirable, unless we want to burn the viewer’s retinas with our sunset pictures :sweat_smile: The situation is even worse for saturated highlights, since the display cannot reach its peak luminance for saturated colors.
      So, like in the SDR case, we need to perform dynamic range compression. But, unlike in the SDR case, we do not necessarily need to do it ourselves (unless we want full control over our image). For instance, the macOS Preview app seems to apply some reasonably-good tone mapping when loading the image. Since the filmic and RGB curves modules do not seem to be HDR ready yet, here I will take the lazy approach of just applying a global exposure correction, and leave further tone mapping to the image viewer.
    3. We still have a problem: We need to uniformly scale the scene luminance in order to make it fit into the target display’s luminance. But how do we choose the scaling factor? We cannot directly use Ansel Adams’ zone system since there is no “pure black” or “pure white” anymore, nor is there “ink black” or “paper white”, and HDR preserves a lot of texture in both shadows and highlights, making it difficult to define zones I–II and VIII–IX. I guess colorists working for the movie industry probably have some solution to this, but I am not very familiar with their work. As a workaround, what I currently do is choose some “reasonable” SDR brightness (I have created presets for 48, 100 and 160 cd/m²) that still gives enough headroom, and map the typical scene luminance to this value, taking into account the lighting conditions (daylight, shade, night) of the scene. At the very least, this ensures that the relative luminances of the pictures in my “virtual slide tray” are consistent.

    In order to obtain consistent luminances between pictures, I have found it useful to enable the “ISO 12646 color assessment conditions” (the little lightbulb at the bottom right of the image preview).
    I then adjust the exposure slider until the main subject looks correctly exposed (taking into account the lighting conditions of the scene), without worrying about the highlights. Its luminance should be “of the same order” as the white border. Note that this will create massive gamut clipping in the preview image! This is because Darktable cannot currently display extended luminance values. With a bit of practice, the gamut clipping indicator can actually become a convenient tool for precisely adjusting the exposure, since it effectively shows which parts of the image are above the SDR brightness.
    This is how the image preview looks, with and without the clipping indicator:


    In this example image, I chose to map the clouds to extended luminance values (> 1.0), while the reflection remains entirely in the SDR range.

    Spoiler: this is how the final image will look in Preview:


    As you can see, the sky isn’t actually clipped.

    Note that when exposing to the right and processing for HDR, the exposure correction can easily be large (e.g. +5 EV); the picture above is far from an extreme example. (I sketch the arithmetic behind this right after this list.)
    At this point, the history stack would typically look like this (possibly in a different order):
    image

  • The last step is to export our photos to an HDR-compatible format. As we will see, the lack of compatible formats and viewers is currently (in my opinion) the main bottleneck making it difficult to distribute HDR images. As discussed above, I will be using the EXR format. This is a lossless, floating-point format, which allows for a huge dynamic range (more than we probably need for photography) at the cost of producing very large files. The Radiance HDR format (.hdr) seems to be more space-efficient, but it does not appear to be supported by Darktable.
    Here are the options that I use for exporting to EXR:
    image
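As promised above, here is the arithmetic behind those large exposure corrections. The numbers below are made up but typical, not measurements from my files; the exposure module simply multiplies pixel values by 2^EV:

```bash
# Suppose mid-gray of an ETTR shot lands around 0.005 after demosaicing
# (a hypothetical but plausible value for a scene shot to protect highlights).
# The EV needed to bring it up to the usual 0.18 mid-gray anchor:
echo 'l(0.18/0.005)/l(2)' | bc -l   # -> ~5.17 EV, consistent with "+5 EV" above
```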

Results

Here are some sample EXR files, which I have produced using the above method. Even if you only have an SDR monitor, you may be able to play with the exposure slider in the tev viewer to assess their dynamic range. These are not necessarily the best pictures, but they are good examples in the sense that they take advantage of the additional dynamic range allowed by the EXR format.

_DSC0576.exr (6.7 MB)
_DSC0942.exr (6.7 MB)
_DSC1087.exr (9.0 MB)
_DSC1645.exr (9.0 MB)
_DSC4195.exr (9.0 MB)
_DSC5433.exr (9.0 MB)
_DSC5464.exr (9.0 MB)

In order to save storage space, you may want to reduce the resolution when exporting. But who doesn’t like to view their pictures in glorious 6K HDR? :upside_down_face:
Well, your storage device, or anyone you plan to send the picture to, might not like it… because a 24 MP EXR image weighs no less than 300 MB (24 million pixels × 3 channels × 4 bytes ≈ 288 MB before compression). This is 10× more than the actual RAW file! I actually had to rescale the above pictures heavily in order to avoid hammering the pixls.us servers.

Storage

This brings us to the main issue with HDR images. There is currently no efficient file format with widespread support. EXR and Radiance HDR are old and heavy, but they are supported by some viewers which can take advantage of HDR displays; HEIC and AVIF have limited support on some platforms and are derived from video codecs, which are not optimized for stills. After reading about the different formats, it seems to me that our best bet might soon be the JPEG XL (.jxl) format, which is optimized for stills, backward-compatible with JPEG, and not patent-encumbered. However, it has only just been standardized (the standard should be published this month). There is already beta support in Firefox (without HDR, it seems), in Chrome, and at Facebook, and some big names like Adobe have expressed interest. This is definitely a format to keep an eye on as far as HDR photography is concerned.
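If you want to experiment already, the reference encoder can be driven like this. This is a hypothetical invocation (file names are placeholders, and it assumes a cjxl build with OpenEXR input support); I have not verified how the HDR data survives the round trip:

```bash
# Compress an HDR EXR export to JPEG XL with the libjxl reference encoder.
# -d 1 is a "visually lossless" distance setting; -d 0 would be mathematically
# lossless at the cost of a much larger file.
cjxl _DSC0576.exr _DSC0576.jxl -d 1
```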

(If you are an Apple user, please use this form to request support for JPEG XL in macOS and state that you are a photographer. You can probably select “Finder” as the feedback area. The more users pester them, the higher the chance that they end up implementing it, especially since the format is not patent-encumbered.)

Final thoughts

From a photographic point of view, these very preliminary experiments have shown me that HDR can actually produce very natural images with comparatively little effort (no need to stuff the entire dynamic range of the scene into that of an SDR display). And for me, less editing = more time outside taking pictures. Of course, I did not even touch on the creative modules of Darktable, since many of them do not yet seem to have the controls needed to work with HDR images (although the plumbing should work as long as we use the scene-referred workflow). For instance, I cannot add control points above 1.0 in the RGB curve, and its axes are not logarithmic.

Of course, not all scenes, nor all artistic intents, will benefit from HDR. In some cases, it can even be detrimental: think of a distracting highlight in the corner of an otherwise well-composed shot. If the image is processed without dynamic range compression, it will attract the viewer’s eyes like UV light attracts bugs and may divert their attention from the main subject. I also expect that some instagrammers will see this new technology as a good opportunity to push the contrast to 1000% and burn our retinas :fire: (or worse: 1000-nit blinking ads). So, as always: this is one more tool for photographers, one that needs to be used wisely, but it is a particularly powerful one, especially for landscape photographers.

Something I have not yet had time to experiment with is producing HDR images from multiple exposures (i.e. remastering some old tone-mapped HDRs to take advantage of the larger dynamic range of HDR displays). In particular, since the MBP display is advertised with a contrast ratio of 1,000,000:1 (≈ 20 EV) thanks to local dimming, it naively seems that a single exposure cannot fully exploit its dynamic range (although there might be some technical limitations which I am not aware of). So far, I have tried exporting to EXR an old DNG produced in Darktable from 3 bracketed exposures, but the result looked surprisingly flat.

To finish, I would like to invite you to experiment with HDR in Darktable (or your favourite FOSS editor) if you have some compatible hardware. In particular, I would be interested to see whether other combinations of file formats and viewers (FOSS or not, on PC or phone, etc…) can successfully display HDR stills. I would also like to know which Darktable modules already work (or can be made to work) with HDR.

Finally, if you have any tips on how to consistently grade HDR images, I will happily listen :slight_smile:

All the images from the present post are released under the CC0 license.

11 Likes

A previous thread on the same topic:

3 Likes

Thanks for documenting and sharing…

2 Likes

Content delivery is the big challenge - especially for stills.

I have found that the only widely compatible way to deliver HDR stills to an HDR display is to use ffmpeg’s loop function along with the zscale filter to take 16-bit linear Rec2020 TIFFs and encode them to 10-bit HLG H.265; H.265 video is the only widely compatible file format for HDR delivery.
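For reference, the skeleton of that pipeline looks something like this. This is a sketch assembled from the ffmpeg/x265 documentation rather than my verbatim command line, with placeholder file names:

```bash
# Loop a single 16-bit linear Rec.2020 TIFF into a 5-second 10-bit HLG H.265 clip.
# zscale converts linear Rec.2020 RGB to HLG (arib-std-b67) YUV;
# atc-sei=18 is the x265 flag that marks the stream as HLG.
ffmpeg -loop 1 -i still_linear_rec2020.tif -t 5 \
  -vf "zscale=tin=linear:pin=2020:t=arib-std-b67:p=2020:m=2020_ncl:r=limited,format=yuv420p10le" \
  -c:v libx265 \
  -x265-params "colorprim=bt2020:transfer=arib-std-b67:colormatrix=bt2020nc:atc-sei=18" \
  still_hlg.mp4
```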

Also of note: in my experience, 90%+ of HDR content out there (Netflix, Disney+, etc.) appears to have still been tone-mapped to some degree with an S-curve, just a less aggressive one than is usually used to compress dynamic range to fit an SDR display. Only “HDR demo” style content has no pre-compression. This is probably done to improve compatibility, since there is huge variation in HDR display capabilities (number of dimming zones, average brightness above which display power limits are hit, gamut, etc.).

1 Like

Indeed, I remember hearing that even for video content, HDR was still a mess due to the many competing standards and the large number of displays on the market, all of which have different characteristics. And this is despite the movie industry being worth billions of $.

Thanks a lot for your suggestion to use H.265! Although this sounds a bit hacky, it could still be a nice workaround for uploading pictures online (this should work particularly well for websites which auto-loop short videos) or showing them using a phone/tablet which does not support EXR (and has finite storage…). I will give this method a try!

Yes, in general it probably makes sense to apply a light tone mapping in order to retain some control over the final look of the image. If I were sharing HDR photos, I would probably target some fixed headroom level that should work for most HDR displays in the wild in average lighting conditions (e.g. 5, assuming a reasonable peak luminance of 500 cd/m² and an SDR luminance of 100 cd/m²). On the other hand, if I knew beforehand the device and lighting conditions, I would probably think in terms of absolute luminance and allow for more headroom. But all of this assumes that I eventually get the RGB curve (or filmic) module to work with extended luminance values… Until then, I’ll have to stick to the HDR demo style.

One “hack” I’ve sometimes used is to map whatever the peak luminance of the transfer function is to a linear value of 1.0, which makes it play nicer with “non-HDR-aware” pipelines.

For example, dividing everything in the HLG standard by 12 (I think the nominal peak luminance is “12.0”), which makes it behave just like how most cameras handle these modes anyway: peak luminance maps to the sensor clip point.
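In exposure terms, that division by 12 is just a constant shift, which can be applied as a single exposure adjustment:

```bash
# Dividing linear values by HLG's nominal peak of 12.0 is an exposure shift
# of -log2(12):
echo 'l(12)/l(2)' | bc -l   # -> ~3.58, i.e. about 3.6 stops down
```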

The negative here is that, with this hack, all previews on an SDR display will be severely underexposed. But the same thing happens if you shoot RAW+JPEG on a camera that is doing this! (For example, Sony S-Log3 shot at ISO 800 will be underexposed by three stops if you calculated exposure using the camera’s displayed ISO rating, because the ISO rating is derived from the JPEG behavior, and ISO 800 S-Log3 uses the same raw sensor gain configuration as “no picture profile” at ISO 100.)

I’ll try to dig up some links tomorrow to:

  • My ffmpeg command line
  • A discussion of alternative transfer functions that was focused on Panasonic V-Log, but is applicable to pretty much any alternative transfer function as implemented by a camera that can shoot RAW while an “alternative” transfer function is in play. (The short summary: the camera’s ISO rating gets shifted, but nothing else changes from the perspective of the raw sensor data.)

As far as multiple HDR standards go: HDR10 is the most widely supported, but HLG is extremely widely supported too; most recent (<4 years old) HDR displays will support both HLG and HDR10. Avoid Dolby Vision, it’s a pile of proprietary crap. The Samsung-pioneered HDR10+ only differs from HDR10 by allowing frame luminance metadata to change dynamically during playback, which doesn’t really affect you if you just loop a single image into a short video clip. HLG is the easiest to work with, since the only metadata you need is the Alternative Transfer Characteristics (ATC) SEI flag, to the point where some people claim “no metadata is needed”. They’re lying: the ATC SEI is metadata, just fairly simple metadata that says “this content is HLG!”

1 Like

Have you tried enabling any of the available EXR compression schemes, e.g. ZIP or PIZ, for a ~2-2.5x saving? (Might want to stick w/ ZIP as a widely used standard; as OpenEXR is updated across platforms, it should start using libdeflate, which is much faster but still compatible w/ older ZIP encoded files.)

This also reminds me that we probably also want to enable exporting as 16-bit float, which should be sufficient to support HDR images for delivery. Another 2x saving there.
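As a stopgap until dt exposes those options, one can also recompress an already-exported EXR externally, e.g. with OpenImageIO’s oiiotool (these are standard oiiotool flags, though I have not benchmarked this on dt output specifically):

```bash
# Rewrite a 32-bit float EXR as 16-bit half float with ZIP compression;
# together these should roughly quarter the file size.
oiiotool _DSC0576.exr -d half --compression zip -o _DSC0576_half_zip.exr
```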

BTW, there is also nothing wrong w/ float TIFF - it is just a container format and can store the same 32-bit float pixel values as EXR. As a plus, libdeflate is already available there via recent libtiff (on most platforms?), and you can also turn on horizontal difference predictor to squeeze out some extra compression. And it also supports ICC profiles unlike EXR. Unfortunately we’re also missing 16-bit float TIFF export in dt currently…

1 Like

I tried something similar here, just targeting mid-gray. It looks good on an SDR display, but I guess it still sacrifices quite a bit of the highlights, since there is clipping above 1.0 and only 1.36 EV was “recovered” (i.e., the peak doesn’t quite get up to 12…).

1 Like

Yup - my ffmpeg command line is there.

Ignore the pedantry that causes people to complain that a command which actually works should not. It does.

One thing that may help in managing how HLG looks on an SDR display is to think of it as similar to a camera’s response curve:

That’s from an ongoing effort to reverse engineer all of the picture profile behaviors of my A7III so that any of them could be linearized. This plots the effective linear S-curve that would be needed to generate the measured (or for HLG, calculated) response if a standard piecewise sRGB transfer function were applied after it. (e.g. RawTherapee’s tone curve, or darktable basecurve as the last operation before conversion/export to sRGB). Note that I suspect there’s a bug somewhere in what I’m doing, as RT’s AMTC gives a very different curve than what I’ve measured for Sony’s “no profile” aka “default stills” mode. I need to go back through everything again.

You’ll notice that the bottom of the “S” for HLG is missing there which is (in addition to the inherent desaturation of displaying Rec.2020 gamut content on a Rec.709 gamut display without proper conversion) why HLG in “backwards compatibility” mode usually looks desaturated and washed out.

It’s also something I still don’t understand about HLG - I can see how the transfer curve should be somewhat backwards compatible, but there’s basically no discussion of the gamut issue - and the gamut mismatch in “fallback” mode leads to it being really fugly.

Which seems to be why all actually deployed content delivery systems (streaming services such as YouTube) don’t rely on the supposed backwards compatibility of HLG, and instead tonemap it themselves. (With YouTube giving you the option of providing your own LUT for the task.)

1 Like

One “hack” I’ve sometimes used is to map whatever the peak luminance of the transfer function is to a linear value of 1.0, which makes it play nicer with “non-HDR-aware” pipelines.

Yep, I actually tried this hack some time ago by exporting to linear 16-bit TIFF with the highest luminance mapped to 1.0. It looks good, but the issue is that the file is not recognized by the viewer as HDR content; therefore I have to manually increase the brightness when viewing the files, and I cannot access luminance levels above the max SDR luminance (which is capped at 500 cd/m² on my laptop).

HLG is the easiest to work with since the only metadata you need is the Alternative Transfer Characteristics (ATC) SEI flag

Indeed HLG seems quite easy to work with! I’ll have a look at the thread you mentioned, and try to use ffmpeg to create some H.265 file and view it on a couple of devices. Do you directly export to HLG in Darktable, or do you export in linear and then let ffmpeg take care of the conversion?

I tried all the available compression schemes, but IIRC only the lossy PXR24 compression led to some gain (it reduced the file size by about half).

This also reminds me that we probably also want to enable exporting as 16-bit float, which should be sufficient to support HDR images for delivery.

That would be nice! Thanks for the link btw.

BTW, there is also nothing wrong w/ float TIFF

My only issue with it is that I could not find any viewer which triggers the HDR mode when opening a TIFF file (although the abovementioned hack works). The maximum compression level saves about 25%.

I exported to 16-bit linear and used the zscale filter (see example in that thread) to do the transfer function conversion.

Note that as of 1-2 years ago, only software encoding in x265 could set the ATC SEI flag required to have a TV trigger HLG mode properly. I don’t know if that is supported for NVENC or VAAPI yet; I had a patch that hacked it in for VAAPI. Distro-included versions of ffmpeg will likely lack the zscale filter needed for this.

Exporting HLG from darktable and then transcoding might also work. Honestly the next time I do it I’m probably going to do a Rec.2020 TIFF and just throw it into Resolve so I can preview it in realtime - Blackmagic’s Decklink/Ultrastudio product lines are one of the only ways to do 10-bit HDMI or SDI output in Linux at the moment.

1 Like

So, I managed to produce a video file encoded in H.265. It is indeed quite space-efficient (15-20× smaller than the TIFF file at the same resolution).

Opening it in VLC, the colors look good, however it is not rendered as HDR and it clearly lacks contrast. It seems that VLC is tone-mapping it by simply reducing its contrast until it fits the SDR range.

Using IINA, the contrast is good, but the video is dim. It seems to uniformly scale down the luminance values, so this basically looks like the TIFF.

I also tried the proprietary viewer Infuse 7, but the result looks just as bad as VLC.

And of course, QuickTime being QuickTime, it simply won’t play H.265…
Actually, QuickTime can play the file if you pass the option -tag:v hvc1 during encoding. It looks like VLC and Infuse 7, but brighter (still not above the max SDR luminance).

I might try to play a bit with the ffmpeg command line options to see if I can improve the situation, but I don’t have much hope.

So it seems we have traded the problem of displaying HDR stills for the problem of finding a video player which can successfully play H.265 in HDR :sweat_smile: Maybe other codecs would work better.

I am now starting to think that pushing for JPEG XL support might not be the least reasonable option.

Don’t want to sound pessimistic, but since we’re not there yet with the HEIF/AVIF ecosystem after several years and investment by big players, what makes you think JXL uptake and (good) SW implementation in both content-creation apps and viewers is going to be quicker/better?

I have a few gripes, at least when it comes to JXL support in darktable:

  1. The library API is not feature-complete yet; I don’t think we’ll keep developing this until that happens (but it should be soon, from the looks of it)
  2. The encoding options seem overly complex and incomprehensible to me, so there is a real risk of not getting it right (any help deciphering and testing those is welcome; the overall complexity of the codec makes me think it’ll take longer for its uptake, not quicker than, say, AVIF)
  3. There seems to be quite a bit of rendering/tone mapping responsibility assumed by the codec, which, to me, seems a bit strange (a storage format IMHO should get out of the way and leave rendering to apps); hope it can be disabled/skipped…
2 Likes

I’ve kind of hinted at this, but to be clear:

Anything that is not a dedicated video playback device (Smart TV, Chromecast, Android TV, etc) is likely to be problematic. However, the majority of HDR displays in existence are Smart TVs with H.265 support. (Some lack HLG support, HDR10 is the lowest common denominator that is supported by nearly everything, but requires a lot more metadata in the file about frame average luminance, etc.)
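For what it’s worth, this is roughly what that extra HDR10 metadata looks like on the x265 side. A sketch only: the mastering-display and content-light-level values below are the stock BT.2020 / 1000-nit example from the x265 documentation, not measurements of any real content:

```bash
# Same single-image loop trick, but PQ/HDR10 instead of HLG. HDR10 wants
# mastering display (master-display) and content light level (max-cll) metadata.
ffmpeg -loop 1 -i still_linear_rec2020.tif -t 5 \
  -vf "zscale=tin=linear:pin=2020:npl=1000:t=smpte2084:p=2020:m=2020_ncl:r=limited,format=yuv420p10le" \
  -c:v libx265 \
  -x265-params "colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc:master-display=G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1):max-cll=1000,400" \
  still_hdr10.mp4
```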

In my case:
USB to Vizio P65-F1
Google Chromecast using LocalCast on Android or mkchromecast on Linux
Built-in YouTube app on the TV or CCGTV
(Caveat: While the Google Cast API does HDR, and YouTube does HDR, you get 1080p SDR if you attempt to combine them…)
(Caveat 2: YouTube states nothing about uploading CRF/CQ video being a bad thing, but CRF/CQ video seems to randomly cause YouTube to refuse to deliver an HDR version of a video - it’ll recognize it as HDR, it’ll tonemap it down, but the HDR version will never become available. VBR seems to be necessary.)

H.265 + PQ is the most compatible/widely supported format, but it still has a massive pile of caveats. H.265 + HLG is probably the second most compatible/widely supported, but still, lots of caveats. It took Google over a year to fix HLG support in CCGTV: it would tell the display it was HDR10 and send HLG data, which behaved similarly poorly to a widely known bug where the device occasionally got stuck sending an HDR10 flag when dropping back into SDR mode.

1 Like

Even DWA compression? DWA is lossy, but with a user-settable quality factor. I don’t know how it is in terms of compatibility… but it should give you good compression for deliverables.

1 Like

DWA compression does not seem to be exposed in DT (3.8.0)

image

1 Like

I am not very familiar with the technical details, so I will trust you on that. Hopefully the API issues will be resolved once the reference implementation is officially published.

As far as uptake is concerned, it seemed to me that there was strong interest from some big players like Adobe and Facebook. In particular, the lossless JPEG transcoding (and the associated reduction in bandwidth) could be a big driver for adoption by social media platforms, especially since there is no need to wait until all browsers support JXL for the benefits to start to appear (e.g. serve the JXL file if the browser supports it, otherwise fall back to JPEG). Of course this does not guarantee that the HDR functionality will ever be exposed…

I’d be happy with AVIF, too, if it sees broader adoption :slight_smile:

Regarding HEIF, my understanding is that uptake was hindered by potential patent issues. Please correct me if I am wrong.

I’ll have a look at the issue you linked to. Not sure I can help much on the Darktable side. However, if I manage to find or produce a “reference” JXL file of an HDR image, I might play a bit with the JXLook viewer to see if I can get it to render correctly (the code did not seem to be overly complicated).

Oh, I see. I have to admit that I don’t own a TV, and that almost all of the non-still pictures I watch come from YouTube :sweat_smile: So I did not realize the support was that bad on non-dedicated devices.

I don’t know what you mean here: Windows 10 and OS X have supported HEIF for years now, as has Ubuntu 20.04 (LTS), and Android since version 8. This means that if you have updated your device within the last 2 years (and you should do that more often, for security), it supports HEIF.

I expect that something similar will happen with JXL once it matures a bit. Early adopters will face a few hurdles, but that’s all; sooner or later everyone will be able to read it.

Will camera manufacturers get their act together and adopt one of these formats to replace JPEG? That’s hard to say, and may not be relevant: people who edit their images in any nontrivial way (beyond cropping and “artistic” filters for Instagram) may already be using RAW, and JPEG is certainly good enough for social media etc.

See the links to other forum posts - there are still plenty of apps that render HEIF/AVIF content in the wrong way. We wouldn’t be talking about this if it were as well implemented/hashed out as JPEG (which, to be fair, is less complex in terms of color management features)…

1 Like