Yes, use darktable.
As pointed out several posts ago, you never see the raw information in an image viewer. At least some basic processing is always done, in most cases with a 'one size fits all' approach.
The endless discussions about the different colour science of different manufacturers, and which is better, are an indicator that there are a lot of decisions to make between capturing light on a sensor, storing it, and processing it to achieve the intended look. Most manufacturers use different picture styles to cope with different users' expectations.
So to enable IrfanView to display the raw files like your camera manufacturer's JPG, it would have to implement all these decisions that your camera manufacturer's JPG engine makes …
Yes, use darktable.
The JPGs look unnatural to me (especially the ones out of my phone) with all the denoising. The DNGs have a higher bit depth too, don't they? They look more life-like, especially at 100%.
This is a common experience when people first encounter Darktable (DT) and free software RAW development. Here’s my own post about it, from almost exactly a year ago:
In my case, the RAW development was very far from the JPG one. It turned out there were all sorts of issues with the shots, partly due to LED lights giving out of gamut colors which confused even the in-body JPEG rendering…
One of the problems we’re facing, if I understand this correctly, is that camera manufacturers (Nikon, Canon, etc.) collaborate with Adobe so that their JPG development process is reproduced more faithfully in Lightroom (LR). This means the RAW development LR users first see is often close to the out-of-camera (OOC) JPEG, because proprietary development algorithms from the camera are reproduced in LR. I found this particularly striking when working on pictures from my Fujifilm X-T2 camera.
An entire play RAW thread discussed the finer details of how that bridge (and its environment) could be rendered:
In the end, I think the experience of free software RAW processing is that you are not trying to reproduce the JPG or LR development. I understand you are not aiming to do that, but the reality is that development is a personal, maybe even artistic process that will yield vastly different results depending on your objectives and (in my case, lack of) skill.
The way I browse RAWs is through Darktable, and I made it explicitly use the JPG thumbnails. This makes sense to me because it’s how I saw the shots when I was taking them, and it’s often the shot I want. It does mean that I sometimes have to go uphill to get back to that JPG, but it’s a small price to pay to not have Adobe and/or Microsoft crap on my computers.
I hope that helps…
I came to about the same conclusion, but from a different journey. Knowing from my computer science background that editing JPEGs rapidly degrades the image (well, all editing degrades the image, it just becomes noticeable faster with 8-bit JPEGs, a discussion for a different thread…), I set out to control my in-camera processing to produce JPEGs that didn’t require any editing other than maybe a crop. Long story short, I was not successful to my satisfaction, so I worked to make a raw workflow that met my needs. Another long story short, at that I was successful, but my particular path involved writing my own software.
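To illustrate the 8-bit degradation point above, here is a toy sketch of my own (not anything from an actual editor): darken a tonal gradient and brighten it back, repeatedly. Kept in floating point, the round trip is essentially lossless; quantized back to 8-bit integers after every step, distinct tones collapse together and never come back.

```python
# Toy demo: cumulative rounding loss when editing 8-bit data.
# The darken/brighten factors are arbitrary illustrative choices.

def darken(v, f=0.7):
    return v * f

def brighten(v, f=0.7):
    return v / f

gradient = list(range(256))  # a full 8-bit tonal ramp

# 8-bit pipeline: quantize to integers after every edit, like re-saving a JPEG
eight_bit = gradient[:]
for _ in range(10):
    eight_bit = [round(darken(v)) for v in eight_bit]            # save...
    eight_bit = [min(255, round(brighten(v))) for v in eight_bit]  # ...edit again

# Float pipeline: keep full precision throughout, quantize once at the end
floats = [float(v) for v in gradient]
for _ in range(10):
    floats = [brighten(darken(v)) for v in floats]
floats = [min(255, round(v)) for v in floats]

print("distinct tones left (8-bit):", len(set(eight_bit)))
print("distinct tones left (float):", len(set(floats)))
```

The float pipeline keeps all 256 distinct tones; the 8-bit pipeline loses a large fraction of them to rounding, which shows up in real images as banding and posterization.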
Now, I don’t think you need to go that far. The mainstream FOSS tools provide sufficient control and flexibility to manage any raw workflow I can envision, including my own. But, more importantly, they allow you to manage your images on your own terms, not some arbitrary default that a camera manufacturer or Adobe figured out. Even then, RawTherapee now has a histogram-matching tool that will make your raw development look substantially close to the JPEG that the camera embedded in your raw file, so to an extent you can compare.
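For intuition about what a histogram-matching tool like RawTherapee's does (this is my own simplified sketch of the general technique, not RawTherapee's actual code): build the cumulative distribution of tones in both the raw render and the reference JPEG, then remap each source tone to the reference tone with the same cumulative rank.

```python
# Toy histogram matching on single-channel 8-bit data.
# The "images" here are synthetic lists of tone values, just for illustration.

def cdf(values, levels=256):
    """Cumulative distribution of tone values in 0..levels-1."""
    hist = [0] * levels
    for v in values:
        hist[v] += 1
    total = len(values)
    out, running = [], 0
    for count in hist:
        running += count
        out.append(running / total)
    return out

def match_histogram(source, reference, levels=256):
    """Remap source tones so their distribution resembles the reference."""
    src_cdf = cdf(source, levels)
    ref_cdf = cdf(reference, levels)
    lut, j = [], 0
    for i in range(levels):
        # find the reference level with the same (or next) cumulative rank
        while j < levels - 1 and ref_cdf[j] < src_cdf[i]:
            j += 1
        lut.append(j)
    return [lut[v] for v in source]

flat_raw = [v // 2 for v in range(256)]   # a dark render, tones squashed into 0..127
camera_jpg = list(range(256))             # a full-range reference
matched = match_histogram(flat_raw, camera_jpg)
print("max tone before:", max(flat_raw), "after:", max(matched))
```

After matching, the squashed render spans the full tonal range again, which is roughly why the tool gets you "substantially close" to the embedded JPEG's global tonality (it cannot, of course, reproduce local adjustments or colour rendering).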
Your camera and Adobe essentially ‘held your hand’ in making an image that, really, just keeps you from asking them a bunch of questions like “Why is my image dark?” “Why doesn’t my image ‘pop’?” and such. The FOSS tools are much less about that, and more about providing you control in how your images are rendered.
If you want the darktable results to look similar to the out-of-camera JPGs, there’s a darktable way to achieve it: https://pixls.us/articles/profiling-a-camera-with-darktable-chart/
I believe someone did that already for Fujifilm cameras: there are “styles” that reflect the different film emulations… Unfortunately, the results, while close in terms of colors, do not really match the other parameters (lighting, contrast, etc.)…
It doesn’t seem this is the only variable. IrfanView as seen in my previous post shows wrong colors (though exposure is correct).
Why does darktable but not IrfanView get the colors right? (I’m not talking about exposure here).
Have you tried digiKam on Windows and Snapseed on Android?
Your DNG doesn’t have a colour space tagged in the EXIF (at the time of capture), so IrfanView doesn’t know what colour space to show it in. Darktable is assigning it one when it shows the DNG for editing (as do ACR, Lightroom, and RawTherapee). I get purple skies in the DNG you have shared with IrfanView and FastStone Image Viewer, and I guess I would get the same with the purple hut roofs in your second picture too. It’s probably more technical than this, but it’s a colour space issue imo.
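To make the colour space point concrete, here is a simplified sketch of my own (linear light only, gamma and clipping ignored, matrices rounded from the sRGB and Adobe RGB (1998) specifications): the same RGB triplet names a different colour depending on which space it is interpreted in. A viewer that ignores a missing or unread tag just assumes sRGB, so the hues shift.

```python
# The same pixel values, interpreted as Adobe RGB vs assumed sRGB.
# Matrices are the published D65 primaries matrices, rounded to 4 decimals.
ADOBE_TO_XYZ = [
    [0.5767, 0.1856, 0.1882],
    [0.2973, 0.6274, 0.0753],
    [0.0270, 0.0707, 0.9911],
]
XYZ_TO_SRGB = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

def apply(m, v):
    """3x3 matrix times a 3-vector."""
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

pixel = [0.4, 0.6, 0.4]  # a muted green, stored in Adobe RGB coordinates

# Correct handling: Adobe RGB -> XYZ -> sRGB before display
correct = apply(XYZ_TO_SRGB, apply(ADOBE_TO_XYZ, pixel))

# Naive viewer: treats the numbers as if they were already sRGB
naive = pixel

print("correct sRGB:", [round(c, 3) for c in correct])
print("naive sRGB:  ", naive)
```

The naive interpretation keeps too much red relative to the correct one, i.e. the displayed colour is visibly wrong even though the stored numbers are identical, which is the kind of shift that produces purple skies and roofs.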
The metadata and transforms are there. It is just that different apps read and process tags differently. Even the humble JPG looks different on different apps one way or another, unless you are a real pro at this stuff. Although I use Irfanview, it isn’t the best app for colour management, just as Windows isn’t the best OS for it.