Converting S1H raw files to V-Log gamma

Yeah, that’s essentially what I meant, haha. As I understand it, it’s (roughly, because as you said, camera models differ but v-log is v-log):

  • subtract black
  • scale channels for white balance
  • linear to v-log, which includes “undoing” the black point subtraction to restore the camera’s actual non-linear response toe
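Those three steps, as a rough single-pixel sketch in Python. The 511 black level, the 12-bit clip, and the 16.224 scale are values discussed elsewhere in this thread; the white-balance gain here is just a placeholder, not a camera-confirmed value:

```python
import math

# Illustrative sketch only: BLACK is the S1H green-channel black level
# mentioned in this thread, WHITE the 12-bit clip; wb_gain is a placeholder.
BLACK, WHITE = 511, 4095

def vlog_encode(x):
    # Panasonic's published V-Log curve; the linear toe below the cut is
    # what "restores" response after black subtraction.
    b, c, d, cut = 0.00873, 0.241514, 0.598206, 0.01
    return 5.6 * x + 0.125 if x < cut else c * math.log10(x + b) + d

def raw_pixel_to_vlog(raw_value, wb_gain=1.0):
    lin = (raw_value - BLACK) / (WHITE - BLACK)  # subtract black, normalize
    lin *= wb_gain                               # per-channel white balance
    return vlog_encode(lin * 16.224)             # sensor clip lands near code 911/1023
```

With these numbers, a raw value at the clip (4095) encodes to roughly code 911 of 1023, and raw black (511) lands on the 0.125 toe offset.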

When you compare v-log to Sony S-Log3 it’s interesting that v-log specifies a black point of .125 while s-log3 uses .09375. The .03125 difference between those is the same as the Sony camera’s linear raw black point, 512/16383. This from cameras that share the same (I think! :wink: ) sensor. Panasonic’s raw black point is 511 (green). 1/512 = .00195312. Maybe that’s where the linearization black point comes from? I honestly don’t know, just a guess.

Definitely possible. I know that Arri Raw footage is encoded as non de-bayered 12bit log.

If you want some extra pain:
For S-Log2, the formulas are designed assuming legal luma range. The video is tagged as being data range, but all of the formulas are clearly for legal, e.g. 0.0 = code value 64 (16 for 8-bit). Code values above 235/940 are allowed, and in the formulas translate to 1.09 = 255/1023 (depending on 8 or 10 bit). This is when in video mode. So the 0.03 offset in S-Log2 sits within the legal luma range, and black winds up at 16 + 6.57 = 22.57 for 8-bit (6.57 = 0.03*219). This code value wastage really sucks for 8-bit cameras.
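The code-value arithmetic above as a quick sketch. The 16/235 legal bounds and the 0.03 offset are from the post; everything else is just multiplication:

```python
# 8-bit legal range: codes 16..235, a span of 219 codes.
legal_black, legal_span = 16, 235 - 16

# Video mode: the 0.03 S-Log2 offset sits inside the legal range.
video_black = legal_black + 0.03 * legal_span

# Photo mode: everything is expanded to the full 0..255 range.
jpeg_black = 0.03 * 255

print(round(video_black, 2))  # 22.57
print(round(jpeg_black, 2))   # 7.65
```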

In photo mode, everything gets expanded, so 0.03*255 = 7.65 is the black level in the JPEG

What’s more, Sony’s S-Log3 whitepapers say “we always use the same luma range”, but that’s a lie for every ILCE-family device’s JPEG engine I’ve characterized. Can’t say anything about the ILMEs (cine bodies). E.g. black winds up below 16 in JPEGs

Panasonic appears to not do this funky JPEG vs video mess.

I’m seeing a difference. All three raws have the same sensor exposure, 1/100sec at the same aperture (not recorded in metadata, reported by @modernhuman), but the vlog_640 raw does not have the ISO 640 gain applied; it has roughly the same histogram as the standard_100 raw, in spite of reporting an ISO 640 gain.

My head hurts… :crazy_face:

Maybe they mean the same scene luma range? In which case black ends up lower for cameras with a lower noise floor?

Why are you doing this multiplication?

I think to understand your theory
For raw to v-log you do this:

  1. raw decoding (0-1 range; well, actually there could be real data above 1 due to white balance and/or highlight reconstruction)
  2. linear data *16.224
  3. linear to v-log formula

for v-log to linear :

  1. v-log to linear formula
  2. linear data / 16.224

why 16.224?
Because if we insert this value in the log formula we get

(0.241514*log10(16.224+0.00873)+0.598206)*1023=911

911 is the clipping level for this camera

So
v-log to linear for small values

  1. lin= (in - 0.125) / 5.6
  2. lin=lin/16.224

((0 - 0.125) / 5.6) / 16.224 = -0.0014
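Putting those numbers into a quick script, using the V-Log constants from the formulas quoted above:

```python
import math

# Published V-Log curve constants quoted earlier in the thread.
b, c, d = 0.00873, 0.241514, 0.598206

def vlog_encode(x):
    # Linear toe below the cut (0.01), log segment above it.
    return 5.6 * x + 0.125 if x < 0.01 else c * math.log10(x + b) + d

def vlog_decode(y):
    # Inverse: linear segment for small values, log segment otherwise.
    # The cut in encoded space is 5.6*0.01 + 0.125 = 0.181.
    return (y - 0.125) / 5.6 if y < 0.181 else 10 ** ((y - d) / c) - b

print(round(vlog_encode(16.224) * 1023))    # 911, the clipping level
print(round(vlog_decode(0.0) / 16.224, 4))  # -0.0014
```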

Exactly! It might be this simple. But not really sure. You might also check the conversion code/module for vlog in the backend of SILKYPIX, which is free to download (linked in one of my previous posts). Maybe there’s some way to utilize that for the underlying conversion in other raw still editors. But I don’t really know anything about coding / hacking.

Exactly what I’d expect. No different than an ISO100 image if you just look at the raw data. With Sony, there’s the “Sony ISO” tag which basically means “Sensor gain matching standard JPEG at this ISO”. (The actual CIPA standard for ISO is derived from JPEG behaviors including transfer function, it has no meaning for raw sensor configuration at all.) It’s no different than various cameras having “DRO” modes that set minimum ISO to 2x the normal - and guess what, it’s just intentionally underexposing the sensor to get more highlight headroom.

So we don’t really need to know how the image was shot. Just - we want V-Log+V-Gamut, so manually enable that if desired.

No, they were definitely talking about legal vs data ranges

Because the graph I had, now that I think about it (need to check) had sensor clip at 1023. So that multiplication compensates for the graph mapping sensor 1.0 to linear light of 46 (code value 1023) instead of linear light 16 (code value 911)

I had 16.12 instead of 16.224, but that may be the subtle difference of 3644/4095 for the code value instead of 911/1023. Fiddling with the calculations, I should probably be doing 911*4+2 instead of 911*4 (911*4+3 overshoots slightly) - but we’re talking fractions of a percent now
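A quick check of that discrepancy: inverting the log segment of the V-Log formula (constants quoted earlier in the thread) to ask which linear scale lands on each code value reproduces both numbers:

```python
# V-Log log-segment constants from earlier in the thread.
b, c, d = 0.00873, 0.241514, 0.598206

def scale_for(code_fraction):
    # Invert y = c*log10(x + b) + d to find the linear value x that
    # encodes to the given normalized code value.
    return 10 ** ((code_fraction - d) / c) - b

print(round(scale_for(3644 / 4095), 2))  # 16.12 (12-bit code 3644)
print(round(scale_for(911 / 1023), 2))   # 16.22 (10-bit code 911)
```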

That was the intent of generating an ICC profile for V-Gamut+V-Log - which is now delivering nearly identical results except for a saturation delta (possibly due to a difference in the color matrix @ggbutcher derived vs. the one used by your camera internally). In theory, I could now generate a V-Log still from a Sony ARW image. :slight_smile: (@ggbutcher - this is why I think that automatically enabling V-Log shenanigans is undesirable.)

I haven’t tried one of the Adobe DCPs yet.

I think that color saturation difference occurs in SILKYPIX too. But it’s definitely not just saturation; it changes colors slightly too, so yes, a matrix delta. I also notice that the vlog video clips and the JPEGs have slight differences too. So there might be some shifts due to compression and the algorithms that save them. Not really sure. But with the Varicam the images are essentially identical across internal and raw transformed in DaVinci Resolve, aside from compression artifacts and low-level detail. Which is to be expected.

My guess is the Varicam was thought through a bit more and geared toward a professional workflow, whereas most people using an S1H are not looking at these details as much and wouldn’t notice. Those differences in a final grade don’t matter too much to me. But it is nice to know that color shifts are as minimal and consistent as possible so the whole workflow can follow from EVF to final image with (mostly) no surprises.

I followed the above in RT, selected Neutral profile as a starting point, added Glenn’s input DCP and your Output ICC and my photo doesn’t turn into what you got. Glenn’s DCP definitely affects the image (adds saturation), but when I add your VGamut-elle-V4-vlogs1h.icc as the output profile nothing changes and the image stays looking like a linear image. What am I doing wrong?

Maybe this file helps?

PLAS1300_vlog-640.tif.out.pp3 (11.3 KB)

One thing to note: The ICC-converted images will display normally (“normal” corresponding to an underexposed sensor) in a color managed application unless you strip the profile.

For example, regardless of output profile, RT will display the image using your display’s output color space/profile (usually sRGB and/or a calibrated/profiled variation). The histogram in RT is displayed in the output color space/profile - so when you changed the ICC profile you should have seen the histogram change.

The image saved will be V-Log data, but since it has an appropriate ICC profile attached, any ICC-aware color managed application will convert the image back into a “normal” color space under the hood. That was the reason for posting both ICC-attached and non-ICC-attached versions - those images have the EXACT same data but render vastly differently in browsers that are color managed or anything else that is.

Try one of the following:

  • Strip the ICC profile using exiftool: exiftool -icc_profile= image.tif (or image.jpg)
  • Load it in Gimp; when it asks whether to convert or keep the profile, choose Keep, then use Image->Color Management->Discard Color Profile (note that if you choose Keep, the histogram does not change)
  • Use ImageMagick’s “display” command

I’m pretty sure Resolve ignores any ICC profile, so Resolve will probably treat it the same as V-Log video?

The DCP was made from the colorchecker in the DPReview studio comparison scene, so YMMV. It may be better to make a DCP using the dcraw primaries from libraw, but that would involve hand-editing JSON and some math to make the dcraw primaries D50-adapted and floating point; I won’t have time to explore that until tomorrow at the earliest.

From what I recall, the studio scene is a high-CRI LED bulb, obviously not sunlight, so that could be part of the problem.

Optimal is, of course, actual colorchecker shots taken in noon sun and via a real tungsten bulb (the latter is a lot more difficult than it used to be but still possible in the US)

As a side note to @modernhuman - an alternative approach I’m looking into is writing some Python to convert raw images into DNG files that Resolve will treat appropriately (as in, enable V-Log output).

Might be a while, I spent some time sidetracked on writing an S-Log to linear DCTL transform because the built-in S-Log transform seems to have some black level issues, potentially due to Sony’s cameras being so inconsistent about luma ranges for JPEG vs video and potentially differences between the 8-bit ILCEs and the much higher end video-focused ILME family. Side benefit of writing the transform yourself is implementing a built-in exposure compensation slider. :slight_smile:

@modernhuman I came across this little project (you would have to navigate to the git page)… almost seems like what you would like to do with your Panasonic files, just vlog… maybe you should try it for fun and see what happens: Rawtoaces - Calling All “developer-types” - Tech/Engineering - Community - ACESCentral, with an ACES LUT afterwards…

This sure sounds like what you are experiencing trying to implement your vlog on the raw file… Help me understand Photography Raw - Discussions - Using ACES - Community - ACESCentral

Yeah, although I will disagree with the “we have no clue what middle grey is” - middle grey will be at 0.18 for a camera that uses an unmodified sRGB tone curve.

Middle grey winding up elsewhere in sensor-normalized space is what happens when you use a different tone curve, and explains why ISO 640 for V-Log has the sensor operating in the same configuration as ISO 100 for “normal” photos.

The challenge is due to “middle grey” being defined in terms of a processed JPEG, and hence dependent on the JPEG (or in this case, video) tone curve. (The standard that defines ISO ratings for digital cameras defines it based on what becomes middle grey in a JPEG, which is why the JPEG tone curve settings can shift the ISO setting even when the sensor didn’t change configuration at all.)
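For reference, plugging 0.18 into the V-Log log-segment constants quoted earlier in this thread puts middle grey at about 0.423, which matches Panasonic’s published 42 IRE grey point for V-Log (the arithmetic below is mine, not quoted from a whitepaper in this thread):

```python
import math

# V-Log log-segment constants from earlier in the thread.
b, c, d = 0.00873, 0.241514, 0.598206

# 18% reflectance through the log segment of the curve.
grey = c * math.log10(0.18 + b) + d
print(round(grey, 3))  # 0.423
```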

I just got back from shooting two different jobs in different states, around 5000 total photos. I shot everything with the S1H in vlog to .rw2 raw files. I used a custom viewing LUT while shooting. Then, in post, I imported all the raws into Capture One 20 to make my selects. I moved all the selects to a different folder and renamed them. Then, I opened all the selects in SILKYPIX, applied my development preset, and rendered them out as 16-bit vlog tiffs. I imported the tiffs into DaVinci Resolve 17 (paid version) and put them into a 6k x 4k timeline. I applied my standard grade, based on my viewing LUT. The photos at this point are 90% of the way there and how they looked when I shot them. I adjusted gamma / density and white balance (using the white balance OFX plugin), plus any localized adjustment nodes like vignettes, windows, etc. where applicable. This is all part of my standardized node tree. After all this I render them out as 8-bit tiffs, where they can then be further processed into jpgs or whatever by being reimported into Capture One or applicable software.

This process, though a lot more lengthy than anything I’ve done before, yields the most flexibility and ultimately authorship of the image. It is worth it to me. But it would be great if someone would make a DaVinci Resolve-like, node-based editor for photographers that worked with the native raws and also had a library and metadata / tagging tools. The main problems with this method are size (raws and tiffs) and flexibility in tagging / naming and rendering. Having photos on a timeline is just not great for that. So anybody that can make that software will create what I think is a next-gen tool for photographers. It’s certainly not for beginners, but the power and speed is unmatched. Also, the Panasonic S series is the only camera that can do this right now, without custom tools. So if you want to try it your options are limited. But it’s pretty amazing and makes the other options feel primitive.

that’s a big ask for thousands and thousands of hours of development time…

For sure, I’m just saying there’s a void in the photography post space that could be filled. I’m not expecting someone to make that for me. But, there’s a huge opportunity to have a unique product that stands out and maybe even become a standard. Just surprised it doesn’t exist…

Y’know, Blender might be a reasonable starting point:

https://docs.blender.org/manual/en/2.79/render/blender_render/materials/nodes/introduction.html

Haven’t looked to see if they ingest raws, but once that’s done…
