[PlayRaw] is this DNG ok?

Hi.
I replaced my broken phone with a new one capable of saving DNG files (Motorola G7 Play).
So far, I find it very strange how washed out the colors are, even after applying some tools (in this case, in darktable). I have already uploaded two raw files to raw.pixls.us, so hopefully something will change regarding the colors.
Only when I switch the input color profile from the embedded matrix to linear Rec 2020 RGB do the colors show up, more or less the way they should.
Here’s my take on it.


IMG_20191114_093358888_01.dng.xmp (14.0 KB) (dt 3.0)

EDIT: For the sake of comparison, below is a shot I took today in HDR mode (automatic, no raw). I confirm this color rendition is very close to reality (what my eyes saw), though still a bit undersaturated, so the closest point lies somewhere between my rendition and this one.

As a side note regarding the subject, the old pinkish building on the right (pun intended) is a police station where dissidents were tortured and killed during the military dictatorship in the sixties and seventies. Communists, they said.

@rawfiner I couldn’t get rid of the purple shadows using the bias slider (the door in the right corner, the fence on the left, and some purple blotches on the police station door).

IMG_20191114_093358888.dng (25.1 MB)

This file is licensed Creative Commons, By-Attribution, Share-Alike.


RawTherapee 5.7-232-gc2ed31991: IMG_20191114_093358888.jpg.out.pp3 (11.5 KB)


Yes, the colors were a bit strange. I tried linear Rec 2020 RGB, but I found the embedded matrix easier to handle. Moreover, the geometry has some glitches: some vertical lines lean to the left, some to the right, and some in the modern building are even curved.
Don’t expect too much from a small smartphone sensor and a small plastic lens in front of it.


IMG_20191114_093358888.dng.xmp (15.4 KB, darktable 2.6.2)

I think the DNG is quite fine. The purple patches on the Chevy are not very distracting and might be due to a reflection of the mural on the wall. Here is my attempt using RT 5.7. I uploaded an un-resized version, exported at 95% quality.

IMG_20191114_093358888-1.jpg.out.pp3 (11.9 KB)

PS: I have a Moto phone too (Moto X4) and I never use the Moto camera app. I use either the Open Camera app or a modified version of GCam 6.2.030 downloaded from the XDA forum. If your phone gets the Android 10 update, you can use GCam 7.2 too.


RT 5.7
I didn’t notice any particular issues with the colors.

IMG_20191114_093358888.jpg.out.pp3 (11.8 KB)


I don’t :slightly_smiling_face:. Still, I’m surprised to learn that the lens is plastic. Are you sure?

What I meant is that the EXIF information seems to be buggy, which may explain the color issue. Isn’t EXIF information important in the first steps of the pixel pipeline?
(Screenshot comparing the EXIF data; on the right, the OOC JPEG.)
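
For what it’s worth, a quick way to check what color metadata the DNG actually carries (the data the “embedded matrix” input profile is built from) is something like the rough sketch below, assuming the rawpy Python package is installed:

```python
# Rough sketch: dump the color metadata embedded in the DNG, which raw
# converters use for the "embedded matrix" input profile.
import rawpy

with rawpy.imread("IMG_20191114_093358888.dng") as raw:
    # Camera RGB -> XYZ matrix derived from the DNG's ColorMatrix tags
    print("rgb_xyz_matrix:\n", raw.rgb_xyz_matrix)
    # As-shot white balance multipliers written by the camera app
    print("camera_whitebalance:", raw.camera_whitebalance)
```

If those values look bogus, that would line up with the washed-out colors.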

Any particular reason? I tried Open Camera, but I think the bundled one is far more usable, especially in manual mode, where you just need your thumb to change any camera setting. Are the results better?

Well, for one, you can focus at a fixed distance and forget about focusing from then on. I do that for flower photos, where I move forward or backward to get a particular part in focus…

Maybe the EXIF is not broken in that app?
But usually I use the GCam mod. Try it out.

Yes, the colors appear a little weak. But your version is quite nice, @gadolf.

IMG_20191114_093358888_01.dng.xmp (15.8 KB)

I like the racing cyclist. Is that a bike shop?


Man! I walk here every day on my way to work and never ever noticed that bike hanging on the wall :astonished:

It seems to be some kind of small shopping gallery, and all I know is that every time I pass in front of the entrance a guy asks me if I want a shave and a haircut. I must have a very sloppy look.


There are mainly two issues with that DNG.

  1. It has vertical banding (which one can fight with RT’s line noise filter set to 90 and its mode set to vertical); a rough sketch of the idea is below the list.
  2. It looks like some details are washed out by in-camera processing (look at the face in the screenshot).
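
This is not RawTherapee’s actual algorithm, just a hypothetical illustration of what a vertical line-noise filter does: estimate a per-column offset pattern and subtract it while leaving the broad image content alone.

```python
# Toy sketch of vertical-banding suppression (not RT's implementation).
# Assumes `bayer` is a 2D float numpy array of raw values; a real filter
# would also treat the CFA color channels separately.
import numpy as np
from scipy.ndimage import uniform_filter1d

def suppress_vertical_banding(bayer: np.ndarray, strength: float = 0.9) -> np.ndarray:
    col_means = bayer.mean(axis=0)                 # average value of each column
    smooth = uniform_filter1d(col_means, size=64)  # keep only broad brightness changes
    offsets = col_means - smooth                   # high-frequency column pattern = banding
    return bayer - strength * offsets[np.newaxis, :]
```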


I couldn’t figure out what they were trying to achieve with that processing. I wonder if a different app like Open Camera could save it without the processing.

Some more zombie faces…


I’m not sure I got @heckflosse’s point.

Are you saying that this DNG has already been post-processed? Is that the way DNGs work?

When I look at the zombies and the vertical stripes, I think only of sensor limitations/quality. Isn’t that what this is about?

My guess is that subject movement, combined with an align-and-merge flow (see Google’s HDR+ white paper), is what happened here.

Not sure what align-and-merge algorithm Moto uses.

@gadolf - Some phones save the intermediate output of a burst align-and-stack. Think of it as being similar to HDRMerge - which is usually beneficial but can sometimes fail.

See HDR+ Pipeline and HACK: Save aligned/merged Bayer image to TIFF file · Entropy512/hdr-plus@135273b · GitHub for an example of how to save out the aligned/merged data prior to demosaic, white balance, and tonemap.
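
For a sense of what the merge step produces, here is a very simplified sketch (my own, not the linked implementation, which aligns tiles and does a robust frequency-domain merge): averaging N equally exposed linear raw frames cuts the noise by roughly √N while the data stays mosaiced and scene-linear, which is why the resulting DNG still looks like a normal, dark raw file.

```python
# Naive "merge" of a burst of pre-aligned Bayer frames (illustration only).
import numpy as np

def merge_burst(frames: list[np.ndarray]) -> np.ndarray:
    stack = np.stack([f.astype(np.float32) for f in frames], axis=0)
    return stack.mean(axis=0)  # still Bayer-mosaiced, still linear
```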


Very interesting information.

The only thing that surprises me is that they would be applying this algorithm to the DNG, since I chose to shoot in manual mode, where there isn’t an HDR option available. The resulting DNG suggests that no HDR processing was applied, given the deep shadows.
In automatic mode, the HDR option is there, and I edited my first post to include an HDR shot of the same scene.

As to deep shadows - I think you’re confusing tonemapping with merging.

Merging approaches such as HDRMerge aim to create a DNG with reduced noise compared to a single exposure - it’s still mosaiced and in the camera color space, and it’s fully linear if done properly.

Tonemapping compresses the dynamic range of the image to fit it into a typical low dynamic range display. Usually by both compressing the highlights and lifting the shadows. You can tonemap a single exposure, but the raised shadows may be too noisy. Aligning/merging/stacking allows you to lift the shadows much more.
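
A toy illustration of that distinction (my own sketch, not any camera’s actual pipeline): merging noisy linear exposures lowers the noise floor, and it is the later tonemapping step, when it lifts the shadows, that makes the lower noise floor visible.

```python
# Merge vs. tonemap: the merge reduces noise, the tonemap lifts shadows.
import numpy as np

rng = np.random.default_rng(0)
scene = np.full(100_000, 0.01)  # a deep-shadow linear value

def merge(n_frames: int) -> np.ndarray:
    frames = scene + rng.normal(0.0, 0.005, size=(n_frames, scene.size))
    return frames.mean(axis=0)  # burst average: still linear, less noisy

def tonemap(linear: np.ndarray, lift: float = 8.0) -> np.ndarray:
    return np.clip(linear * lift, 0.0, 1.0)  # crude shadow lift

for n in (1, 8):
    print(f"{n} frame(s): shadow noise after tonemapping = {tonemap(merge(n)).std():.4f}")
```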

https://hdrplusdata.org/hdrplus.pdf describes, for example, the Google HDR+ pipeline. In that pipeline, JPEGs are generated by the full pipeline described, while the DNGs that are output come only from the stack-merging step.

Of course it could just be a low-quality lens… Other phone manufacturers are not as forthcoming about their pipeline implementations as Google is.


The problem with the DNG format is that it could contain either raw or processed data; one never knows. When a DSLR generates a DNG (as the sole option or as an alternative to the camera manufacturer’s proprietary raw format) we can be pretty sure it is unadulterated raw data. With a phone I suspect the DNG format is just thrown in as a bonus, to suggest camera capabilities which simply do not exist in hardware but are merely simulated in software.

I generally use my smartphone as a camera only in the rare cases when I am not carrying a real camera, and I treat DNG as “Do Not Go” unless it is true camera raw and the only option. On my Ricoh GR I use DNG, whereas on my Pentax bodies I use PEF. On my phone (Xiaomi) I do not see any benefit to what it calls raw DNG.

You can’t always get what you want…

I see what both @Entropy512 and @Mike_Bing mean: a DNG can wrap anything, a true raw or a post-processed image.

Still, I don’t believe any post-processing has been done on this specific DNG, especially something that resembles HDR processing, as @Entropy512 suggested, simply because the image speaks for itself: the sky is well exposed and the rest of the image is underexposed, exactly as it should be, since I used manual mode and metered the light from the sky. The corresponding OOC JPEG is on par with that.
Besides, I added to my first post a second image of the same scene taken with the same camera, but this time in automatic HDR mode, and the result is completely different from the DNG (before any further editing), which I understand rules out the possibility that this or any other kind of HDR workflow has also been applied to the DNG.
Anyway, I really appreciate your comments.

If the sky is well exposed and the rest of the image is underexposed, that tells us nothing about whether any sort of multi-exposure merging happened. This is exactly what you’ll get, for example, if you use HDRMerge and open the resulting DNG without tonemapping. The key indicator will be whether you see the noise floor drop significantly between the HDR and non-HDR modes.

“HDR” is sadly used to mean a lot of things, including tonemapping approaches that compress HDR down to an LDR display. HDRMerge and the align-and-merge phase of Google’s HDR+ workflow do NOT do this. They only serve to reduce shadow noise - and if you open the result in any raw processing software, the behavior you describe is EXACTLY what is expected. The difference between that and a single shot shows up when you try to tonemap: if you tonemap a single-shot image from a phone, you’re likely going to see serious shadow noise. If you tonemap the result of HDRMerge or an align/stack/merge algorithm, you’ll see far less shadow noise when the tonemapping algorithm lifts the shadows.

For me, “HDR” means anything that doesn’t fit into what an sRGB display can represent, which is admittedly a very wide category - which is why I try to clarify which aspect of HDR I’m talking about when discussing it (tonemapping vs. merging). I’ve seen at least one purported expert in the field fail to understand that someone was talking about tonemapping and assume they were trying to do a merge operation - these are completely different things.

Extra confusion (even for purported experts) arises from the fact that, at least for the most well-documented phone HDR pipeline (the 2016 Google HDR+ paper), the paper describes multiple pipeline steps - a JPEG goes through all of them (including a tonemapping phase), but a DNG only goes up to the align-and-merge step of the pipeline and the results at that point are saved out.

It may be that the Moto is, even in manual mode, doing the align-and-merge phase.

A key test might be: if you save a DNG in “manual” mode and a DNG in “HDR” mode, do you see a significantly higher noise level in the manual-mode one?
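
If you could get a DNG out of both modes, a rough way to compare their noise floors would be something like the sketch below (file names and crop coordinates are made up; it assumes the rawpy Python package and a dark, featureless patch at the chosen coordinates):

```python
# Compare the noise floor of two DNGs by measuring a dark patch of raw data.
import numpy as np
import rawpy

def dark_patch_noise(path: str, y: int = 100, x: int = 100, size: int = 200) -> float:
    with rawpy.imread(path) as raw:
        patch = raw.raw_image_visible[y:y + size, x:x + size].astype(np.float32)
    return float(patch.std())  # standard deviation of raw values in the dark area

print("manual:", dark_patch_noise("manual_mode.dng"))  # hypothetical file names
print("hdr:   ", dark_patch_noise("hdr_mode.dng"))
```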

I can’t; HDR only shows up in automatic mode, where I can’t save a DNG.

Auto:

Manual:

Btw, I must say that this manual-mode GUI is very good and easy to use: all you need to do is swipe your right thumb on the setting you wish to change while keeping the framing. And the live histogram really helps.