Converting S1H raw files to V-Log gamma

Also, just download the MXF files and add the extension .MXF to them and they work just fine. I just downloaded one and checked myself. And remember, all the other files need to be changed to .MOV or they aren’t correct. No file was encoded with an .mp4 wrapper.

No problem. LUTCalc is well regarded, I’ve just found it not quite sufficient for my needs, but maybe I’m too picky? :slight_smile:

Initial analysis of the files @modernhuman provided, specifically “internal” S1H V-Log:
Everything is tagged as “full” luma and rec709 - which is pretty common for log profiles (i.e. the user must know that it’s log and convert it, instead of it being tagged with an appropriate profile in metadata)
A small kink - this is my first time working with a camera that actually outputs 10-bit, so ffmpeg COULD be mangling some stuff here. When I try to avoid causing quantization issues, GIMP appears to be doing some gamma scaling. So for now I’ll output 8-bit JPEG from ffmpeg until I figure out a better analysis tool.
Black comes to 19/255
White is 243/255 - so above the legal range limit of 235. Black being above 16 hints that this is, similar to Sony, “pseudo-legal” - where 0.0 is mapped to 16 (or 64 for 10-bit), but values above 235/940 are permitted, so 255/1023 map to 1.09, not 1.0

Time to go through the white paper now.

Ah, just saw:

Hmm. So ffmpeg may be doing something strange here… 194 is not 128! Same for 243.

hmm…

OK, so even though the file is tagged as “full” luma range, ffmpeg is doing a legal->full expansion on it. From the white paper: 128 - 64 = 64, and (64/876)*255 = 18.63 - exactly where I saw black in the JPEG.

Similarly: 255*(911-64)/876 = 246 - pretty close to the 243 I saw in the JPEGs.
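For concreteness, here’s a tiny sketch of that arithmetic (a hypothetical helper, not from any of the tools discussed):

```python
# Quick check of the legal->full expansion arithmetic above.
# 10-bit legal range: black = 64, white = 940, span = 940 - 64 = 876.

def legal10_to_full8(code):
    """Map a 10-bit code value through a legal->full expansion to an 8-bit value."""
    return 255 * (code - 64) / 876

print(legal10_to_full8(128))   # V-Log black clip -> ~18.6, matching the JPEG black
print(legal10_to_full8(911))   # V-Log white clip -> ~246.6, close to the observed 243

# "Pseudo-legal" headroom: a full-scale 10-bit value of 1023 lands above 1.0
print((1023 - 64) / 876)       # ~1.095, i.e. the ~1.09 superwhite limit
```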

Luma ranges are the bane of my existence. And my “usual” quick method for analyzing 8-bit videos apparently chokes BADLY on 10-bit videos.

Also, clip happening at 911 indicates that sensor data was prescaled so that sensor clip doesn’t map to 1023, but instead to something lower. Same way Sony has S-Log3 achieve the same minimum ISO as S-Log2… I’ll need to feed 911 and 1023 into the formula to see if this explains the anomaly @age saw

When in Resolve, if you open Clip Attributes and select ‘Auto’ or ‘Data’ levels, you will get the correct levels. If you select ‘Video’ it expands out, and that’s probably where you are getting your 64, etc. The other levels like 194 I’m not sure about - sounds like a mishandling of the file in the software. Remember, I’m looking at the level below the noise floor. If you were looking at the top of the noise floor it would be slightly higher, around 140ish depending on the channel. But the bottom of the noise floor (i.e. black clip) is 128, and that is again confirmed in the Panasonic white paper and all of the V-Log files, converted or internal. Hope that helps.

For this particular task, I’m not using resolve, I’m using ffmpeg. For 8-bit videos, ffmpeg does not do a luma range conversion when saving a JPEG frame, but for 10-bit videos, it appears that it does decide to ignore the flag that the video is full luma and expand it anyway… In general, luma range handling in FOSS video software is currently an utter disaster which is why I gave in and bought Resolve last week. It has its own issues, but overall it seems to me more of “learning curve” than “this software is widely documented to behave badly under certain conditions and the developers refuse to even acknowledge it”.

(Most of the people on these forums are doing stills work, and the various stills processing software like ART, PhotoFlow, darktable, and RawTherapee tends to be a lot more consistent as far as color management, although the whole video concept of “legal/broadcast range” is kind of alien to many here to give everyone credit.)

Plugging in your code value of 911 for the clip point, I get a result of 1.5 EV instead of the 1.15 @age determined for where that code value lies in comparison to 1023. Adding the 0.9 conversion factor seen in many places makes it 1.35, not 1.15. But there may be some scaling factors in the other direction (potentially due to “standard” profile JPEG behaviors) that explain the remaining difference, just like S-Log2 has a predicted delta of 2.4 EV but minISO is scaled by 3 EV.

Which explains why the minISO for V-Log on the S1H is 640, when if you look at where a code value of 512 lies, the delta should be a lot more…

As a side note, another potential contributing factor to the deltas @age noticed is white balance.

Some systems will take R and B and multiply them by their appropriate scale factors, leaving G at 1.0 and as a result, potentially clipping R and B. Others will take the R/G/B white balance multipliers, and normalize them so that the highest mult is 1.0 and everything is scaled downwards. This can result in the post-WB exposure shifting by a factor of the largest multiplier depending on implementation. Then, AFTER all of this AND after matrix transforms, you might again have some clipping of the post-WB and post-transform data… Making things even more difficult as far as duplicating things EXACTLY.
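The two white-balance strategies described above can be sketched numerically (illustrative multipliers only, not from any specific camera):

```python
import numpy as np

# Two common ways to apply white-balance multipliers, per the description above.
wb = np.array([2.0, 1.0, 1.5])    # hypothetical R, G, B multipliers
rgb = np.array([0.8, 0.6, 0.9])   # linear sensor values, 1.0 = sensor clip

# Style 1: scale R and B up, leave G at 1.0 -> R and B can clip past 1.0
style1 = np.clip(rgb * wb, 0.0, 1.0)

# Style 2: normalize so the largest multiplier is 1.0 and everything scales
# downward -> no WB clipping, but overall exposure drops by the largest
# multiplier (here 2.0x, i.e. 1 EV)
style2 = rgb * (wb / wb.max())

print(style1)   # R channel has clipped at 1.0
print(style2)   # everything pushed down, nothing clipped
```

This is before any matrix transforms, which (as noted above) can introduce a second round of clipping on top.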

Duplicating things but with a potential EV shift will be easier.

Getting closer… But this is the first time I’ve used generated ICC profiles for output. Something seems bogus in the gamut mapping. I’m able to get pretty close on the luminance histograms, but saturation/gamut is hosed (my output is severely desaturated compared to the camera JPEG). I need to do some more digging… As I mentioned, so far all of my work has been going the other way (from S-Log/S-Gamut to linear, and exporting in a more standardized colorspace). I need to do more digging when I’m more awake.

Edit: Bedtime. Something is really weird as far as gamut transforms from V-Gamut to/from anything else. I need to get some sleep and poke at this later.

I think the “white” should be way less than 1023;
only with a crazy high value in linear space is it possible to reach that value after the log transform, so it should be correct.

Black and middle gray are more important for the 3D LUT.

Makes sense for a manufacturer that is exclusively 10-bit, but it’s bad on a camera that records 8-bit only - too many wasted code values. (or: Why S-Log3 is an extremely bad idea on almost any Sony that records 8-bit video.)

What I can’t figure out is why I calculate a value of 1.5 EV and not the 1.15 you found - again, might be due to the normal JPEG having an S-Curve that moves things around a bit as far as how the “base” ISO is defined. 1.5EV puts the histogram pretty close to the JPEG, but again, I’m seeing some gamut behaviors I haven’t figured out yet and that may be throwing off the histogram a bit by shifting hues around and thus altering the luminance component.

I pushed up what I have so far to the repo I linked above with a mangled ICC profile generator, but as I said, I need to take a second look at some things.

Edit: Oh, looks like I can delete all media_whitepoint references since I’m generating only V4 profiles.


So, I’ve been doing some investigation regarding incorporating LUTs in rawproc. Seems to be a simple thing, take your input value, look it up in the table, and use what you find. Ooooh no, the LUTs are rather sparse, so you have to interpolate between your input values and a corresponding output value. Sounds easy? NO! Interpolation is grounded in all sorts of complex math, especially to represent non-linear functions. Gah, should have taken more math in college… :crazy_face:

So far, I’ve tried a simple linear interpolation, and a spline-based interpolation using the Tino Kluge spline I use in my curve tool. And what I’ve noticed trying to emulate the camera log curves is that you need to make a rather large LUT to even begin to look okay, and I have yet to keep the shadows from speckling. Compare that to the cleanness of hard-coding the math function from the spec, and unless there’s a super-keen interpolation I should be using, I’m not going to get a LUT I’d be satisfied to use for this particular application, emulating camera log curves.
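The shadow problem can be seen numerically. Here’s a sketch using a generic log2-style curve as a stand-in for a real camera log curve (the curve and sizes are illustrative, not from any spec):

```python
import numpy as np

def log_curve(x):
    # generic log2-style curve as a stand-in for a camera log curve
    return np.log2(x * 1023 + 1) / 10   # maps [0, 1] -> [0, 1]

x = np.linspace(0.0, 1.0, 100000)
reference = log_curve(x)

# Worst-case error of a linearly interpolated 1D LUT at various sizes.
# The error is dominated by the shadows, where the curve bends hardest.
errors = {}
for size in (33, 1024, 65536):
    grid = np.linspace(0.0, 1.0, size)
    approx = np.interp(x, grid, log_curve(grid))  # simple linear interpolation
    errors[size] = float(np.abs(approx - reference).max())
    print(size, errors[size])
```

A 33-point table misses badly near black, which is consistent with the shadow speckling described above; the error only becomes negligible at very large table sizes.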

So, right now, I’m reconsidering how I’d incorporate a LUT tool in rawproc, and almost definitely not for applying camera log curves. Maybe I’m missing something; that’s why I’m posting this. Also, seems to be relevant to the original endeavor of the thread…


Yeah, LUT interpolation is waaaay beyond my mathematical skill, but what I do know is that tetrahedral interpolation is generally regarded as the best method for 3D LUTs. For 1D LUT transforms like linear to V-Log it’s much better to use a 65536 point linearly interpolated curve.


I know the ffmpeg source code is a good reference for tetrahedral interpolation.

The issues you raise are why I’ve generally not been a fan of cube LUTs and am preferring Resolve’s Color Space Transform plugin (other than the fact that I can’t tweak its transform function to handle camera-specific oddities), although there’s a concept of a “shaper LUT” - a 1D LUT applied per-channel before the 3D LUT to make the 3D LUT operate on a more perceptually linear space.

https://nick-shaw.github.io/cinematiccolor/luts-and-transforms.html#x37-1600004.4.3

HALD CLUTs have the advantage that they can leverage hardware interpolation/lookups on GPUs, and a level 16 (256x256x256) LUT is not excessively large (4096x4096 texture) and has a 1:1 mapping of input to lookup table entry for 8-bit video. I want to figure out a good way to transform a HALD CLUT into a smaller-dimensioned cube LUT along with a shaper LUT on the input to handle the “heavy lifting” of the log transform.
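As a sketch of the 1:1 mapping mentioned above, here is one way to lay out a level-16 identity HALD CLUT. The channel ordering (R fastest, then G, then B) is an assumption - tools differ on this, so treat it as illustrative rather than a universal spec:

```python
import numpy as np

# Identity HALD CLUT, level 16: a 4096x4096 image whose flat pixel index i
# encodes every 8-bit RGB triple exactly once (assumed ordering: R varies
# fastest, then G, then B).
level = 16
side = level * level            # 256 entries per channel
size = level ** 3               # 4096 pixels per image side

i = np.arange(size * size, dtype=np.uint32)
r = (i % side).astype(np.uint8)
g = ((i // side) % side).astype(np.uint8)
b = (i // (side * side)).astype(np.uint8)
hald = np.stack([r, g, b], axis=-1).reshape(size, size, 3)

print(hald.shape)                # (4096, 4096, 3)
print(hald[0, 0], hald[-1, -1])  # [0 0 0] and [255 255 255]
```

Applying such a CLUT to 8-bit video is then a direct lookup with no interpolation needed, which is the 1:1 property referred to above.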

(As a side note, Ambarella-based cameras get pretty good results with a 16x16x16 cube LUT due to using a pre-shaper - Color correction and 3D LUT cube - GoPrawn action and dash cam forums )

Yeah, doing this I got the feeling there were reasons the camera manufacturers put the log transform in the camera, as a prep for the LUT work in non-linear space.

Since I want log curves with abandon in rawproc, I’ll probably end up with what @NateWeatherly describes, a rawproc-specific 1x65536 curve format…

Yes, indeed; I just made a 65536-row LUT of the ARIB STD-B67 log-gamma curve that I already implement parametrically in rawproc, and the outcome is much better with the simple linear interpolation. Ironically, the deep black shadows are more speckled in the parametric image, but that may be inherent image noise that the lowest increment of the LUT is crushing…
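For reference, the ARIB STD-B67 (HLG) OETF and the 65536-row table idea look roughly like this - a sketch using the published spec constants, not rawproc’s actual implementation:

```python
import numpy as np

# ARIB STD-B67 (HLG) OETF constants, from the published spec
A, B, C = 0.17883277, 0.28466892, 0.55991073

def arib_b67_oetf(e):
    """ARIB STD-B67 OETF: scene-linear [0,1] -> signal [0,~1]."""
    e = np.asarray(e, dtype=np.float64)
    sqrt_part = np.sqrt(3.0 * e)
    # clamp the log argument so the unused branch doesn't produce NaNs
    log_part = A * np.log(np.maximum(12.0 * e - B, 1e-12)) + C
    return np.where(e <= 1.0 / 12.0, sqrt_part, log_part)

# 65536-row 1D LUT over uniformly spaced linear input
lut = arib_b67_oetf(np.linspace(0.0, 1.0, 65536))
print(lut[0], lut[-1])   # 0.0 at black, ~1.0 at clip
```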


I made a test file for visualizing the results of 1D LUTs and it’s been really helpful, so I thought I’d share. Starting at sensor saturation, I made 15 daylight exposures through an ExpoDisc on my A7III, processed them to linear 16-bit TIFs, and composited a crop from the center of each into a grayscale step wedge. They are labeled in EV distance from the raw black point specified in the EXIF data, which is 512/16383 (i.e., 0.03125/1.0). Counting up from this black point places “zone 5” exactly in the middle of the camera’s log tonal range, at 7 (really 6 2/3 with white clipping) stops from full saturation, and perfectly aligns with the middle gray reference in Sony’s S-Log3.

Untagged 16bit Linear TIF:
A73_EVsteps_16bit_Linear_labeled.tif (22.5 MB)

Synthetic DNG (linear, but already demosaiced and white balanced) so you can use it in RAW converters.
A73_EVsteps_16bit_Linear_labeled.dng (9.3 MB)

Anyway, a step pattern like this makes it really easy to use a waveform scope to plot the resulting transfer characteristics and see how the adjustments relate to the overall dynamic range of the sensor. I use 3D LUT creator (take screenshot of whatever app I’m using it in and drop that into 3D LUT Creator to view the scope), but Darktable also has a waveform (and Resolve, of course).

To build my LUTs I use an app called Lattice. Again, not free or open source, but as of right now I don’t think there’s anything out there that does what it can do for free. LUTCalc is great for some things, but its 1D LUTs are limited to 16384 points and for whatever reason I can never get the legal/extended/data/full range and clipping settings right. If anyone can make an open source “Lattice” you’ll be a hero! :slight_smile:

Some Examples using it to visualize the effect of a LUT:
Linear (no LUT)

Linear → sRGB

Linear → sRGB with Adobe Camera Raw default curve

Linear → V-Log without exposure scaling

Linear → V-Log with range scaled from 0.0-1.0 to 0.0-32.0

  • ISO 100×5 = ISO 500, Sony’s “native” ISO, because nominal ISO 640 is actually more like 500. The Sony Venice uses the same sensor as the A7III (and also the Panasonic S1 and maybe S1H, though perhaps with a different CFA) and lists its native ISO as 500

V-Log with Kodak 2393 Print Film Emulation LUT

Pure 14-bit log curve expression: log2($x*16383+1)/14 (notice how you can see the noise expanding in the shadows)
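The expression quoted above is easy to sanity-check directly (with $x taken as a normalized linear value in [0, 1]):

```python
import math

# The "pure 14-bit log" expression quoted above: log2(x*16383 + 1)/14,
# with x a normalized linear value in [0, 1].
def pure_log14(x):
    return math.log2(x * 16383 + 1) / 14

print(pure_log14(0.0))   # black -> 0.0
print(pure_log14(1.0))   # clip -> 1.0
print(pure_log14(0.18))  # middle gray sits well up the curve, ~0.82
```

The steep slope of the curve near zero is what makes the shadow noise visibly expand, as noted above.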


You’re not the only one. That’s what drove me to ICC profiles, RawTherapee and HALD CLUTs.

Luma range shenanigans are the bane of my existence. I suspect it’s why I’m not quite matching the Panasonic JPEG with my approach yet and/or I have other bugs in my implementation I pushed to github. I’ve got some ideas I hopefully will have time to fiddle with tonight or tomorrow. I thought my problem was some sort of gamut transform problem, but I remembered - getting your black point wrong (easy when luma-range shenanigans are in play) will result in things being desaturated and I think that’s exactly what happened.

Definitely not even remotely the case. It MAY share a sensor with the A9/A9II, but definitely not the A7M3 - it’s definitely a stacked BSI sensor, to achieve the kind of readout rates the VENICE does.

Your idea of a “synthetic” DNG makes me think - why not go all-out and synthesize a linear DNG programmatically, instead of splicing crops from A7M3 data? That’s been on my list to do eventually for other purposes.

I want to figure out a good way to convert a high-level HALD CLUT into a lower-resolution cube LUT with an associated pre-shaper.

The more I’ve thought about it, the more I think it will be next to impossible to exactly match the in-camera log-gamma image without resorting to a target-based LUT. I’ve been reading some, and it seems like in-camera transforms most likely use some kind of complex polynomial-based matrices that can vary the transform for shadows, highlights, etc., in addition to varying the matrices and chromatic adaptation based on white balance, ISO, etc. That’s not to say it isn’t possible to get close enough, but like you said, I think it might take a matrix + high-res 1D curve + 3D LUT, with the final LUT being based on target calibration shots.

Ah, thanks for setting me straight. I guess I just assumed that they were probably the same since they came out around the same time, have the same resolution, back-side illumination, dual-native ISO settings, and both had new color filter characteristics compared to earlier models.

Here ya go! I got this one from a post by Iliah Borg a while back:
16stops_1d12.dng (41.7 MB)

I use that one a lot, but I wanted one made from actual exposures so that I could see how the sensor response affects the final curve. For instance, what if the transfer function supplied by the manufacturer is actually descriptive rather than prescriptive? I.e., maybe the V-Log transfer function is actually a description of the result of applying a simpler linear-to-log transform to the raw camera data. That’s what you’d need for integrating 100% linear CGI VFX with camera footage that has a natural toe due to the noise floor, etc. In that case, using the specified V-Log curve on raw camera images would lead to a raised black point and reduced saturation.

For instance, here’s a pure 16bit Log

Wouldn’t there be something (a formula) to find either in the raw developer tab in Resolve and/or the EXIF data in the Varicam DNGs? When the Resolve raw developer “sees” a Varicam raw it gives the option to transform. So perhaps there is code in there that tells what that transform is? And/or inside the DNG itself there is something that says “change me to V-Log/V-Gamut under these circumstances”. Because otherwise the DNG behaves like a regular DNG in photo editors, and Resolve for that matter. The only difference is the exposure stays correct and isn’t off by 2.68.

When I output the signal from my Varicam it’s going over SDI as a raw signal into the Shogun, and the signal looks dark and not at all like V-Log. Which tells me that the camera didn’t process it into V-Log and kept it linear. Although the Atomos requires, when shooting ProRes, converting from V-Raw to V-Log. So there is a LUT, but that LUT is more of a chimped LUT and doesn’t quite look right. They also have a few different ones for different color temperatures. Which might be what you guys are talking about regarding the transform changing with white balance. But again, that is only for ProRes; for raw you don’t do anything other than record. The signal is always raw coming out of the camera; it’s only on the Shogun that you decide what format to record.

Just thought there might be a simpler way to do this and it’s hiding right there in the code.

It’s an enormously complex thing, and there are a lot of aspects of it that merit digging - it happens that I think a lot of people here want to have a complete understanding of what’s going on, not just for your immediate needs, but also to know “hey, we’ve seen something similar before and we know X happens”.

I still haven’t had a chance to take a look at your Atomos DNG; the fact that it has a LinearizationTable tag makes it quite unusual - as in, the first time I’ve seen it in use even though I knew it existed. A small sidenote: a demosaiced DNG is usually referred to as a “Linear DNG”, but the fact that a demosaiced DNG can carry a nonlinear LinearizationTable tag makes it clear that “Linear DNG”s are not always linear!

I don’t have an Atomos device myself… Yet… The advantages (8-bit 4:2:2 vs 8-bit 4:2:0) aren’t quite worth it, although I’m tempted to get a Ninja V as a field monitor with LUT support that can grow to be used as a recorder in the future, vs. getting a cheap Feelworld monitor-only solution.

I fully agree matching in-camera V-Log will likely not be possible, but it should be possible to get extremely close (as in, visually indistinguishable in 95%+ of scenarios, and potentially better in those). Getting something that works without killing performance, well… That’s a whole other story.

It’s either used to revert compression (e.g. NEF- or Sony-style reduction to 10/12 bits before Huffman coding; even the new Apple ProRAW DNG uses it to convert to 12 bits and run lossless JPEG on top, I think), or it’s a true tone curve.

It’s easy enough to dump it from DNG using exiftool -b -LinearizationTable to a text/csv file one can plot.
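Once dumped, the table is trivial to load for plotting. A minimal sketch, assuming the exiftool output is whitespace- or comma-separated integers (`load_lin_table` and the sample values are hypothetical, for illustration only):

```python
# Sketch: load a LinearizationTable dump for plotting. Assumes the dump is
# whitespace- or comma-separated integers, e.g. produced by:
#   exiftool -b -LinearizationTable input.dng > lintable.txt

def load_lin_table(text):
    """Parse a dumped linearization table into a list of ints."""
    return [int(v) for v in text.replace(",", " ").split()]

sample = "3977 3978 3980 4000"   # made-up values, just for illustration
table = load_lin_table(sample)
print(len(table), table[0], table[-1])
```

From there it can go straight into any plotting tool to see whether it’s a decompression ramp or a true tone curve.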

I’m on the fence about putting camera-specific parametric log curves in the rawproc tone “zoo”. Indeed, I think it would be the best way to lift linear, using the code right out of the vendor spec, but the tone tool already requires a scrollbar to get to the whole thing and I find myself scrolling back and forth to see the curve pane after I make parameter changes.

The 65536-entry 1D LUT is still my favorite solution; I might play with a smaller LUT with a logarithmic entry progression to better represent the left side of the curve…
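That idea can be sketched quickly - log-spaced entries put the table density in the shadows where the curve bends hardest. This uses a generic log2 curve as a stand-in for a camera log curve, so the numbers are illustrative only:

```python
import numpy as np

def log_curve(x):
    # generic log2-style curve as a stand-in for a camera log curve
    return np.log2(x * 1023 + 1) / 10

size = 1024
# log-spaced inputs from 1e-5 to 1.0, plus an explicit 0.0 entry so the
# table is dense in the shadows where the curve bends hardest
grid = np.concatenate(([0.0], np.logspace(-5, 0, size - 1)))
lut = log_curve(grid)

# plain linear interpolation between the (non-uniform) entries still works
x = np.linspace(0.0, 1.0, 100000)
log_err = float(np.abs(np.interp(x, grid, lut) - log_curve(x)).max())

# compare against a uniformly spaced table of the same size
uniform = np.linspace(0.0, 1.0, size)
uni_err = float(np.abs(np.interp(x, uniform, log_curve(uniform)) - log_curve(x)).max())

print(log_err, uni_err)   # log-spaced entries win by orders of magnitude
```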

Like this? varicam_dng_black_linearization_table.txt (5.4 KB)
I tried it with a couple different frames from different files and the table is identical, at least just skimming it by eye and checking the beginning and end. So not sure what this means, but it starts at 3977 and ends at 65536.