Converting S1H raw files to V-Log gamma

Forgot to say that the standard profile is on the left and the vlog is on the right.

Ahahah that was my fault indeed.

Let me explain. When we look at this photo, for example:

It is a photo with V-Gamut and V-Log “gamma”. It doesn’t have an ICC profile attached, so a photo viewer generally assigns the sRGB profile to it.
This is it.

Effectively there are only three steps for the raw to V-Gamut/V-Log conversion:

  1. conversion to v-gamut linear
  2. exposure compensation (+2.68 EV?)
  3. linear to v-log formula

ta-dah
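For reference, the linear-to-log step can be sketched in Python using the V-Log curve constants as I read them from Panasonic’s V-Log/V-Gamut reference document (treat the constants as my transcription, worth double-checking against the PDF):

```python
import math

# Published V-Log curve constants (my transcription of Panasonic's
# V-Log/V-Gamut reference document - verify before relying on them).
CUT1 = 0.01
B, C, D = 0.00873, 0.241514, 0.598206

def vlog_encode(x: float) -> float:
    """Map scene-linear reflectance (after exposure compensation) to a V-Log code value (0..1)."""
    if x < CUT1:
        return 5.6 * x + 0.125          # linear toe segment
    return C * math.log10(x + B) + D    # logarithmic segment
```

Middle gray (0.18) should land around 42% IRE, and linear black at 0.125, i.e. roughly 128/1023, which matches the 10-bit black clip discussed further down in this thread.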

This doesn’t look right. What we really have to do is assign the sRGB profile (we could do it in GIMP, for example).

2 Likes

Looks promising, but how are you achieving the first step, the conversion to V-Gamut linear?

Yeah, that’s the tricky part. V-Gamut itself is well defined with reference to XYZ values, but unless you also use the same camera-native-to-XYZ matrix that is used for the in-camera V-Log transform, the results will be different. It’s especially hard since each camera will have its own transform to V-Log depending on the sensor characteristics, and the transform most likely also varies a bit with the in-camera white balance setting (due to the chromatic adaptation transform) and maybe even with overall scene luminance. Really, Panasonic could be doing anything between the raw and V-Log states.

Perhaps the manufacturer’s matrix is embedded in the RAW metadata?

Edit: Then again, maybe the V-Log matrix is the camera native to XYZ (D65) matrix? Maybe try making a profile with V-gamut primaries and assigning it to the untagged linear tiff?
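To try that, the V-Gamut RGB-to-XYZ matrix can be derived from the primaries. A minimal sketch, assuming the V-Gamut primaries and D65 white point as I read them from Panasonic’s document (verify before trusting):

```python
import numpy as np

def npm(primaries_xy, white_xy):
    """Normalized primary matrix: linear RGB -> XYZ, with (1,1,1) mapping to the white point."""
    xy = np.asarray(primaries_xy, dtype=float)
    # xyY -> XYZ with Y = 1 for each primary; column j is primary j
    xyz = np.stack([xy[:, 0] / xy[:, 1],
                    np.ones(3),
                    (1 - xy[:, 0] - xy[:, 1]) / xy[:, 1]], axis=0)
    wx, wy = white_xy
    white_xyz = np.array([wx / wy, 1.0, (1 - wx - wy) / wy])
    scale = np.linalg.solve(xyz, white_xyz)   # per-primary luminance weights
    return xyz * scale                        # 3x3 RGB -> XYZ matrix

# Assumed V-Gamut primaries and D65 white (note the blue y is negative,
# i.e. outside the spectral locus - the math still works).
VGAMUT_PRIMARIES = [(0.730, 0.280), (0.165, 0.840), (0.100, -0.030)]
D65 = (0.3127, 0.3290)

M = npm(VGAMUT_PRIMARIES, D65)  # linear V-Gamut RGB -> XYZ (D65)
```

An ICC profile built from these primaries could then be assigned (not converted) to the untagged linear TIFF, as suggested above.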

1 Like

That makes sense, and it’s probably not revealed because Panasonic doesn’t want people to know it for various reasons, or they simply have no interest in sharing such a thing. But I am wondering about the Varicam, which may use similar tech: based on my testing the S1H has extremely similar output, given the tech trickle-down that companies like Panasonic probably employ. Does the metadata embedded in the DNG I shared from that camera have those special codes, and could they be ported to the S1H, or at least decoded to figure out the matrix?

Also, age’s latest result looks pretty similar, but I’m not sure without being able to test across a wide variety of images. If he can outline his workflow, to give a noob like me a way to test, I can confirm whether that method could work.

1 Like

The problem is that I have used PhotoFlow and ART.

It’s not necessary to have native support for every log color space workflow, but then some modules/steps become necessary:

  1. correct decoding of the raw file
  2. expression parser, so that a mathematical formula is evaluated for each pixel (only PhotoFlow for now)

https://gmic.eu/reference/mathematical_expressions.html


This is useful for the linear-to-log conversion, but it could be used for the gamut conversion too, because in the end it’s a color matrix, so a channel mixer with enough precision could be used as well:

https://www.colour-science.org:8010/apps/rgb_colourspace_transformation_matrix

  3. a 3D LUT module
  4. the capacity to assign, rather than convert to, a profile at the end (only PhotoFlow)
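As a sketch of the channel-mixer idea above: once you have a 3x3 gamut matrix (from the colour-science tool linked above, for instance), applying it is just a per-pixel matrix multiply, which is what an expression parser or a high-precision channel mixer ends up doing:

```python
import numpy as np

def apply_matrix(image: np.ndarray, m: np.ndarray) -> np.ndarray:
    """Apply a 3x3 color matrix to an (H, W, 3) float image, per pixel."""
    # out[h, w, i] = sum_j m[i, j] * image[h, w, j]
    return np.einsum('ij,hwj->hwi', m, image)
```

With the identity matrix this leaves the image untouched; a real gamut conversion would pass in, e.g., a linear-camera-RGB-to-V-Gamut matrix.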
2 Likes

Hmm… That’s odd. What profile WAS assigned to the image? Normally “no profile” is assumed to be sRGB. Did PhotoFlow assign an unusual profile transfer function due to using the expression parser?

Parents have left, I should be able to add V-Log and V-Gamut to my ICC profile hackery and at least clean it up to be pushable tonight.

Some good points as to the camera’s raw/native-to-V-Gamut conversions. Since the color profile for almost any camera in FOSS (or even proprietary software like Adobe’s) comes from testing/profiling/reverse-engineering the camera and not from the manufacturer itself, you’ll never be able to match the behavior EXACTLY in a bit-precise fashion - but you’ll be able to get close, and in fact might get better results if you have a good, robust profile. At the cost of performance, though - for example, having to batch-script RawTherapee to do the raw-to-V-Log conversion prior to ingesting the preprocessed V-Log images into Resolve.

A fully hardware accelerated workflow in Resolve could get a bit more difficult. I’m still learning my Resolve-isms, including things like “why do I see what appears to be a black offset for S-Log2 video if my timeline transfer function is PQ, but not HLG?”

As a side note, the typical “EV adjustments” referenced here should no longer be necessary if you take the documented transfer function and normalize it so that the maximum code value translates to linear light of 1 and not something greater than 1. (If you’re mucking with ICC profiles, I’ve found that this eliminates compatibility issues where some software will cap data at 1, although that may partly be due to me mis-setting some flag somewhere in the profile.)
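To illustrate that normalization idea, here is a sketch using the V-Log decode curve as an example (constants are my transcription of Panasonic’s document and should be verified): divide the decoded linear value by the linear value of the maximum code, so that code 1.0 decodes to exactly 1.0.

```python
# V-Log decode constants (my transcription - verify against Panasonic's PDF).
B, C, D = 0.00873, 0.241514, 0.598206
CUT2 = 5.6 * 0.01 + 0.125  # encoded value where the linear toe ends

def vlog_decode(v: float) -> float:
    """V-Log code value (0..1) -> scene-linear reflectance."""
    if v < CUT2:
        return (v - 0.125) / 5.6
    return 10.0 ** ((v - D) / C) - B

PEAK = vlog_decode(1.0)  # linear light at the maximum code value (~46x)

def vlog_decode_normalized(v: float) -> float:
    """Same curve, rescaled so the maximum code value decodes to exactly 1.0."""
    return vlog_decode(v) / PEAK
```

With the normalized version, software that caps linear data at 1.0 no longer clips anything, at the cost of middle gray landing at a much smaller linear value.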

1 Like

For the three images @modernhuman provided in post 39, I opened each in rawproc and applied absolutely no processing other than converting the unsigned 16-bit integers to 0.0-1.0 float. Here are the stats for each:

  1. _LAS1302_standard-100.RW2:

     channels:
     rmin:     0.007706	rmax:     0.249298
     g1min:    0.007812	g1max:    0.249298
     g2min:    0.007782	g2max:    0.249298
     bmin:     0.007660	bmax:     0.249298
     channel means:
     rmean:     0.014213
     g1mean:    0.019342
     g2mean:    0.016874
     bmean:     0.019329
    
  2. _LAS1301_standard-640.RW2:

     channels:
     rmin:     0.007797	rmax:     0.249298
     g1min:    0.008026	g1max:    0.249298
     g2min:    0.007996	g2max:    0.249298
     bmin:     0.007950	bmax:     0.249298
     channel means:
     rmean:     0.024721
     g1mean:    0.037421
     g2mean:    0.029903
     bmean:     0.037404
    
  3. PLAS1300_vlog-640.RW2:

     channels:
     rmin:     0.007721	rmax:     0.249298
     g1min:    0.007767	g1max:    0.249298
     g2min:    0.007797	g2max:    0.249298
     bmin:     0.007751	bmax:     0.249298
     channel means:
     rmean:     0.014146
     g1mean:    0.019277
     g2mean:    0.016822
     bmean:     0.019265
    

The ‘standard-100’ and ‘vlog-640’ are essentially the same exposure, although the metadata tag for ISOSpeedRatings in the ‘vlog-640’ has a value of 640. The ‘standard-640’ shows channel means that are different enough in the proper direction from the other two files to indicate its reduced exposure due to the higher ISO for the same shutter speed and (?) aperture.
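For anyone who wants to reproduce numbers like these, the per-channel stats can be computed along these lines - a sketch assuming the raw mosaic is already loaded as a uint16 numpy array with an RGGB layout (the actual CFA layout varies per camera):

```python
import numpy as np

def cfa_stats(raw: np.ndarray) -> dict:
    """Per-channel (min, max, mean) of an RGGB Bayer mosaic, scaled to 0.0-1.0."""
    f = raw.astype(np.float64) / 65535.0   # unsigned 16-bit -> 0.0-1.0 float
    planes = {
        'r':  f[0::2, 0::2],   # even rows, even cols
        'g1': f[0::2, 1::2],   # even rows, odd cols
        'g2': f[1::2, 0::2],   # odd rows, even cols
        'b':  f[1::2, 1::2],   # odd rows, odd cols
    }
    return {k: (v.min(), v.max(), v.mean()) for k, v in planes.items()}
```

The shared rmax/g1max/g2max/bmax of 0.249298 in the stats above is then just the sensor clip point of the 16-bit-container raw data expressed on that 0.0-1.0 scale.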

FWIW…

2 Likes

These are the exact behaviors I’d expect given how ISO is standardized. It’s exactly how Sony implements it too, except that it’s 8x instead of 6.4x (due to S-Log2 having a different transfer function. S-Log3 should have an even greater delta, but at least in every Sony ILCE-family camera I’ve tested, Sony prescales the raw data so that sensor clip is mapped to a lower maximum code value, in such a way that a code value of 128 for 8-bit winds up assigned to the same percentage of full sensor scale as for S-Log2.)

Based solely on the transfer functions and an assumption that the maximum possible code value is associated with sensor clip (for example, if the maximum possible code value maps to a linear light value of 12.0, map sensor clip to this instead of to 1.0), one would expect S-Log2 to have a 2.4EV delta from sRGB, but it’s 3EV instead. As I’ve mentioned before, I haven’t had a chance to retest/reanalyze to determine if Sony is actually running the sensor at a slightly higher gain (0.6EV additional gain beyond what’s reported in the maker-specific “Sony ISO” tag), or if that 0.6EV is due to the impacts of Sony’s s-curve applied to JPEGs before the sRGB transfer function.

Hoping to have time tonight to implement V-Log in my analysis tools and ICC hax.

Edit: @modernhuman - In the event that one of your cameras might throw things off by performing prescaling hackery, and to avoid potential “legal vs data range” luma issues, can you get a very short H.264 or H.265 clip at minimum ISO of:

  1. A fast lens, aperture wide open, slow shutter speed, pointed at a brightly lit flat surface (i.e. slam the sensor as far past clipping as you can, to determine if Panasonic might be prescaling sensor clip to something other than the maximum code value).
  2. Lens cap on, shutter speed as fast as possible (to determine where black level is landing, and whether there’s any legal-vs-data luma range funkiness at this end).

For example, Sony flags their videos as full/data luma range because they use code values all the way up to 255, but uses formulas that correspond to legal luma range, with the end result being that black lands at a code value of around 23 for 8-bit video, which is consistent with 16 + 6.57, with 6.57 coming from 0.03 * 219. If it were actually full-luma-range video, black would land at 0.03 * 255.0 = 7.65 - the fact that this is not an integer is a fun issue I haven’t yet figured out how to handle… Does Sony fudge this up to 8 (or 23) or what? What does Panasonic do??? (In the case of Sony - my old A6500 is landing black at 25!!! :frowning: This has caused me significant headache/pain in shadow handling with Resolve!!!)
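The arithmetic behind those black-level code values, spelled out for anyone following along (the 0.03 black offset and 16-235 legal span are from the discussion above):

```python
def black_code_value(black_fraction: float, bit_depth: int = 8,
                     legal_range: bool = True) -> float:
    """Code value where a given linear-black fraction lands.

    Legal (video) range for 8-bit spans 16-235 (219 steps);
    full (data) range spans 0-255.
    """
    peak = (1 << bit_depth) - 1
    if legal_range:
        lo = 16 * (peak + 1) / 256     # 16 for 8-bit, 64 for 10-bit
        span = 219 * (peak + 1) / 256  # 219 for 8-bit, 876 for 10-bit
    else:
        lo, span = 0.0, float(peak)
    return lo + black_fraction * span
```

So a 0.03 black offset lands at 16 + 0.03 * 219 ≈ 22.57 in legal range, versus 0.03 * 255 = 7.65 in full range - neither of which is an integer, hence the rounding ambiguity described above.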

1 Like

Actually, it looks like there are two consecutive EV adjustments:

  1. +2.678 EV, maps middle gray to 0.18
  2. +1.15 EV (it could be related and better handled here https://github.com/colour-science/colour/issues/343)
  3. linear-to-log formula

result

out of camera
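As a sanity check on those numbers: EV adjustments are just powers of two applied to linear data, and +2.678 EV corresponds almost exactly to the 6.4x ISO-delta gain mentioned earlier in the thread, with the additional +1.15 EV bringing the combined gain to roughly 14.2x.

```python
# EV adjustments are powers of two applied to scene-linear data.
ev1, ev2 = 2.678, 1.15

gain1 = 2 ** ev1           # ~6.40x - matches the "6.4x" ISO delta above
total = 2 ** (ev1 + ev2)   # ~14.2x combined gain before the lin-to-log formula

def apply_ev(linear_value: float, ev: float) -> float:
    """Scale a scene-linear value by an exposure adjustment in EV (stops)."""
    return linear_value * (2.0 ** ev)
```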

Legal-vs-full luma hackery like that (and yeah, there’s also that weird 0.9 factor sometimes seen) is exactly why I asked for full-clip and black samples.

Also, I forgot to mention it in my last post: the reason for asking for video samples is that sometimes luma scaling is different for JPEGs. (E.g. Sony flags videos as full range but they’re more “pseudo-legal” with black at 22.57, while JPEGs are proper full-range and put black at 7.65 instead of 22.57.)

1 Like

Most likely we won’t have the problem of wrongly interpreted YUV footage, because we start from the raw.

This is out of my league, but the tool here seems like it could account for almost any conversion. The S1H is not listed as an option, but it looks like you could take a linear raw to V-Log with many possible modifications… so it might be possible to get your LUT with some experimentation?
https://cameramanben.github.io/LUTCalc/LUTCalc/index.html

1 Like

Ok, so there are going to be a lot of files. The file names hopefully explain the files pretty well. I’m uploading, from the S1H: raw and simultaneous JPG, internally recorded video (All-I 400 Mbit H.264), and JPG converted to H.264 in DaVinci Resolve with a Rec.709 output - in 100 standard, 640 standard, and 640 V-Log, with both overexposed “white” clip and underexposed “black” clip variants of each. Then, for the Varicam, it’s internally recorded All-I 400 Mbit .MXF (H.264), raw DNG stills, and DNG converted to V-Log output as H.264 Rec.709, in black and white clip variants. I hope that covers it.

Just a note: the link that I provided much earlier outlining V-Log and V-Gamut shows that the black clip is 128 and the white clip is 911, I believe, which should be apparent in all of the V-Log-encoded files. Also, all files are running at data level. I checked, and when flipped to video level it expanded them into the wrong numbers, so data level is correct for these files.

This first post is the stills, the next post will be the video files.

s1h_black_standard_100.RW2 (33.9 MB) s1h_black_standard_640.RW2 (33.9 MB) s1h_black_vlog_640.RW2 (34.0 MB) s1h_white_standard_100.RW2 (33.9 MB) s1h_white_standard_640.RW2 (33.9 MB) s1h_white_vlog_640.RW2 (33.9 MB) varicam_10bit_raw_black_800.DNG (10.5 MB) varicam_10bit_raw_white_800.DNG (10.5 MB)

Part one of the video files is below, outlined in the previous post. However, this site wouldn’t let me upload .mov or .MXF, so you will have to change the extensions. Every file is a .mov, except the .MXF files, which have been labeled as such in the file name. It might take me a bit to convert them all to under 100 MB and correct file types, but here are the first few. If you have any questions, feel free to ask. Thanks again.

and the internal s1h black vlog 640

and finally the Varicam internal vlog 800 .MXF

@priort I have used LUTCalc before; I never got quite the results I was hoping for from it. That’s part of why I recently started on a method for generating HALD CLUTs using ICC profile hackery.

One limitation is that it only generates cube LUTs up to 65x65x65, which could have inaccuracies compared to doing proper mathematical transforms with float32 internals, and/or at least doing a shaper pre-LUT.

Speaking of that:
I’ve pushed my current hackery up to https://github.com/Entropy512/elles_icc_profiles/commit/57ed2591ad2c0653ab34550c5260b0b0618f0b1d - will work on adding V-Log/V-Gamut support next

It’s dirty and I’m conflicted in naming conventions between “Give Elle credit” and “These are such quick hacks that they’d probably be horrified” (I know that they left these forums for various reasons, and it sounds like being nitpicky about technical inaccuracies might be one of them?)

Despite that - assigning the S-Log2 profiles to an image exported from a Sony S-Log2 clip using ffmpeg, and then adjusting the result in RawTherapee, has given me by far the best results as far as conversion LUTs go. (In my case, I edit the image to taste, then copy the processing profile operations to a level-16, aka 256x256x256, HALD CLUT that has been tagged with the same profile, basically allowing me to apply it to 8-bit source video with no interpolation at all.) Unfortunately, Resolve doesn’t support HALD CLUTs, so I need to come up with a conversion tool that generates an appropriate shaper LUT and then a follow-on cube LUT.
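A minimal sketch of the HALD-to-cube reserialization part of that (assuming the HALD pixels are already loaded as floats in scan order with red varying fastest, which is the usual HALD CLUT layout - verify against your generator; the shaper-LUT half is omitted here):

```python
import numpy as np

def hald_to_cube(rgb_entries: np.ndarray, title: str = "converted") -> str:
    """Serialize LUT entries (N^3 x 3, red varying fastest) as .cube 3D LUT text.

    Both HALD CLUTs and .cube files store entries with red varying fastest,
    so the conversion is essentially a reserialization.
    """
    n = round(len(rgb_entries) ** (1.0 / 3.0))
    assert n ** 3 == len(rgb_entries), "entry count must be a perfect cube"
    lines = [f'TITLE "{title}"', f"LUT_3D_SIZE {n}"]
    for r, g, b in rgb_entries:
        lines.append(f"{r:.6f} {g:.6f} {b:.6f}")
    return "\n".join(lines) + "\n"

# Tiny identity example (n=2): build entries with red varying fastest.
n = 2
idx = np.arange(n)
grid = np.stack(np.meshgrid(idx, idx, idx, indexing="ij"), axis=-1)  # (b, g, r)
identity = grid[..., ::-1].reshape(-1, 3) / (n - 1)                  # -> (r, g, b)
```

A real tool would read the HALD image pixels instead of generating an identity table, and would be downsampling from 256³ entries to whatever cube size Resolve handles comfortably.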

Also hopefully we didn’t break the forums too badly with lots of large files… Google Drive might have been better for this particular scenario, oh well. Working on downloading everything. :slight_smile:

Edit: The MXFs make the forums very unhappy, they’re not downloadable. So definitely Drive for them. I THINK I got everything else, although there’s a concern the forums might have mangled or altered the video.

1 Like

Thanks for the explanation and for sharing your work/experience… I have no need for any of this now, as I don’t own any hardware to generate these files, but it is interesting to try to follow how things work… thanks again!

Looking at the scopes in DaVinci Resolve, I can confirm that the clip points were 128 and 911 for every V-Log file, except the standard captured JPGs (both 100 and 640), which expanded out to 0 and 1023. But the V-Log-encoded files are the only ones I care about; otherwise the whole looking-through-the-LUT feature of the S1H is pointless, or at least not realistic for work.