Sony colors mystery

There are actually three ways to do this:

  • directly in-camera, by pressing the Q-menu during playback
  • using RAW Convert EX (it’s based on SilkyPix and supports film simulations)
  • for models newer than the X-T1 you can also use Capture One Express (cameras older than the X-T2 / X-H2 are supported, but without film simulations) or, similarly for newer cameras, connect with a USB cable and use X Raw Studio: you see the results on the computer screen, but the camera’s engine is used to process the images.
2 Likes

This is something I wish Sony supported, because then it would be possible to feed synthetic, controlled inputs into the camera to make it easier to reverse engineer the output.

Yes, you can perform correlations between an arbitrary captured RAW and its output, but it’s a lot easier to put in something synthetic that covers many data points (such as feeding a HALD CLUT or controlled gradients through the processing chain to see what comes out).
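
For what it’s worth, an identity HALD CLUT is trivial to generate. A minimal sketch in Python (the level and output filename are arbitrary choices, nothing camera-specific; level 8 gives a 512×512 image that encodes a 64×64×64 3D LUT from a single frame):

```python
# Minimal sketch: generate an identity HALD CLUT image.
# Level 8 -> a 512x512 image with 64 steps per channel; both the level and
# the output filename are just illustrative choices.
import numpy as np
from PIL import Image

level = 8
cube = level * level          # 64 steps per channel
side = level ** 3             # 512 px image side

idx = np.arange(side * side)
r = idx % cube
g = (idx // cube) % cube
b = idx // (cube * cube)

rgb = np.stack([r, g, b], axis=-1).astype(np.float64)
rgb = np.round(rgb * 255.0 / (cube - 1)).astype(np.uint8)

Image.fromarray(rgb.reshape(side, side, 3), mode="RGB").save("hald_identity_8.png")
```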

1 Like

I’m not sure you can provide arbitrary input for this feature… Of course I’ve never used it and am not really interested, but it seems unlikely. Maybe someone who does use it can chime in.

It’s a lot harder, but not impossible, for proprietary raw formats. Uncompressed tends to be much easier: determine the offset/arrangement of the image data, then replace it while leaving all headers/metadata/etc. intact.

For example, Sony’s uncompressed RAW is TIFF-ish enough that generating synthetic modified ARW files would probably take an hour or two of Python haxing - but since you can’t take an ARW and re-develop it in-camera, it isn’t very useful to do so.
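
To make that concrete, here is roughly the shape of that Python hacking, assuming an uncompressed, TIFF-like file whose image data sits in a single strip (tags 273/279 with count 1). It only walks the top-level IFD chain and ignores SubIFDs (which real ARW files may well use), I haven’t tested it against actual camera files, and the filenames are placeholders - a sketch of the approach, not a working tool:

```python
# Rough, untested sketch: walk the top-level TIFF IFD chain, find an IFD that
# carries a single image strip (StripOffsets/StripByteCounts with count 1),
# and overwrite just that byte range with synthetic data, leaving every
# header and metadata byte untouched.
import struct

def patch_single_strip(path_in, path_out, make_payload):
    data = bytearray(open(path_in, "rb").read())

    endian = "<" if data[:2] == b"II" else ">"
    ifd_offset = struct.unpack(endian + "I", data[4:8])[0]

    while ifd_offset:
        n_entries = struct.unpack(endian + "H", data[ifd_offset:ifd_offset + 2])[0]
        tags = {}
        for i in range(n_entries):
            e = ifd_offset + 2 + 12 * i
            tag, typ, cnt, val = struct.unpack(endian + "HHII", data[e:e + 12])
            tags[tag] = (cnt, val)

        # 273 = StripOffsets, 279 = StripByteCounts; assume a single strip so the
        # value field holds the offset/length directly rather than pointing at an array.
        if 273 in tags and 279 in tags and tags[273][0] == 1:
            offset, length = tags[273][1], tags[279][1]
            data[offset:offset + length] = make_payload(length)

        next_off = ifd_offset + 2 + 12 * n_entries
        ifd_offset = struct.unpack(endian + "I", data[next_off:next_off + 4])[0]

    open(path_out, "wb").write(data)

# Example payload: a simple repeating byte ramp of the right size.
patch_single_strip("input.arw", "synthetic.arw",
                   lambda n: bytes(i % 256 for i in range(n)))
```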

Ah, gotcha. Even if you could shove a HALD CLUT in there, it’d likely just give you one rendition of the in-camera film simulation, right? As far as I was aware, there is a lot more to the Fuji film simulations than basically just a LUT, which is what we would get here, right?
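
To be clear about what “basically just a LUT” would mean in practice: if a processed HALD CLUT did come back out, turning it into a .cube file is only a matter of reading the pixels back in the order they went in. An untested sketch, assuming the processed image comes back at the same size and pixel alignment (filenames and level are placeholders); it would obviously only capture the per-pixel colour mapping, not grain, local tone mapping or anything spatial:

```python
# Untested sketch: read a camera-processed HALD CLUT (same level-8 layout as
# the identity image above) and dump it as a .cube 3D LUT. Filenames and the
# level are placeholders.
import numpy as np
from PIL import Image

level = 8
cube = level * level                       # 64

img = np.asarray(Image.open("hald_processed.png").convert("RGB"),
                 dtype=np.float64) / 255.0
table = img.reshape(-1, 3)                 # row-major pixel order == .cube order (red fastest)

with open("camera_look.cube", "w") as f:
    f.write("TITLE \"extracted look\"\n")
    f.write(f"LUT_3D_SIZE {cube}\n")
    for r, g, b in table:
        f.write(f"{r:.6f} {g:.6f} {b:.6f}\n")
```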

If someone wants to fiddle with a Fuji raw and can get the HALD CLUT identity image in there, I’d be happy to run it through my camera. I’ll also probably have an X-T5 soon, so it’ll have all the newest film simulations.

1 Like

I might try taking a crack at it in a week or two… I keep on putting too many projects on my TODO list though. :stuck_out_tongue: I’ve been on and off doing something similar with the Xphase 360 camera - feeding synthetic data into their postprocessing software to figure out exactly how their camera is saving out its JPEGs (nonstandard colorspace and transfer function, neither of which are documented, and I suspect they’re not even using a standard JPEG YUV->RGB matrix…). That project keeps on getting delayed partly because it is driven only by morbid curiosity at this point - the camera has too many flaws for me to consider buying it, and the manufacturer officially told third-party developers to pound sand last week. But there’s always that challenge of figuring out how the thing works anyway. :slight_smile:
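
The matrix part at least is tractable with synthetic patches: if you can record which RGB values the decoder spits out for known YCbCr inputs, a least-squares fit recovers the 3×3 matrix (plus any offset), and you can compare it against the standard BT.601/BT.709 ones. Purely illustrative - the “measurements” below are faked with BT.601 just so the example runs on its own, and real data would also involve clipping and quantisation:

```python
# Illustrative sketch: recover an unknown YCbCr -> RGB matrix from paired
# synthetic inputs and observed outputs via least squares.
import numpy as np

# Full-range BT.601 YCbCr -> RGB matrix (the thing we pretend not to know).
M_true = np.array([[1.0,  0.0,       1.402],
                   [1.0, -0.344136, -0.714136],
                   [1.0,  1.772,     0.0]])

rng = np.random.default_rng(0)
ycbcr = rng.uniform(0, 1, size=(50, 3))
ycbcr[:, 1:] -= 0.5                      # centre the chroma channels
rgb = ycbcr @ M_true.T                   # pretend this is what the decoder output

# Fit [R G B] = [Y Cb Cr 1] @ A, where A is (4, 3): a 3x3 matrix plus an offset row.
X = np.hstack([ycbcr, np.ones((len(ycbcr), 1))])
A, *_ = np.linalg.lstsq(X, rgb, rcond=None)

M_fit, offset = A[:3].T, A[3]
print("recovered matrix:\n", np.round(M_fit, 4))
print("recovered offset:", np.round(offset, 4))
```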

True, it may be performing local processing; I’ve seen things go either way in that regard. Proving it would be easier with synthetic input data, of course. :slight_smile: Which current Fuji do you have?

X-T3 currently, but that’ll be gone when the 5 comes at week’s end.

This is always sad, since we could have some way nicer things if there were open protocols (glares at the Fuji mobile app).

2 Likes

Hmm, my cameras shoot DNGs and I never considered trying to feed controlled data to the in-camera raw processing. You can do a lot of in-camera raw development on those devices. I guess I wouldn’t use it myself, but I know people who’d love the GR II positive film look as well as some other looks.

Would the raw format being DNG simplify this process?

1 Like

Yeah. It’s too bad that this goes all the way down to the silicon manufacturers - there are so many deeply flawed Ambarella-based products that would become amazing if some open-source developers could hack the firmware. Same with the Xphase - I’d buy one even with their horrendous official software support if I could hack the firmware (but that simply isn’t going to happen, and part of that is because so many SoC manufacturers are anal about SDK licensing…).

There’s some enormous potential in Ambarella-based action/dashcam stuff, but it’s entirely wasted by camera manufacturers who have no clue what they’re doing and basically throw a fancy case around a reference design.

Yeah, especially if (again) it’s uncompressed DNG. The challenge is that even with DNG, minor deviations from what the camera thinks it should be writing could break the “on-camera raw development” stuff.
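
One cheap guard against accidental deviations: diff the patched file against the original and confirm that nothing outside the intended strip range changed. A throwaway check along these lines (filenames and the offset/length are placeholders; in practice they’d come from the StripOffsets/StripByteCounts tags as in the earlier sketch):

```python
# Sanity check after patching an uncompressed DNG/raw: the new file should be
# byte-identical to the original everywhere outside the image-data range we
# meant to touch. Filenames, offset and length are placeholders.
def only_strip_changed(orig_path, patched_path, offset, length):
    a = open(orig_path, "rb").read()
    b = open(patched_path, "rb").read()
    return (len(a) == len(b)
            and a[:offset] == b[:offset]
            and a[offset + length:] == b[offset + length:])

print(only_strip_changed("original.dng", "synthetic.dng", 0x80000, 6000 * 4000 * 2))
```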

1 Like

I was looking at one, i.e. the X-T5, but might go for the X-H2. Big buffer and nicer EVF… but I haven’t physically seen them, so maybe the size might change my mind…

1 Like

Same here. Here in Portugal they’re only 100€ apart, so it’s a no-brainer. A bit sad, because the X-T5 has better looks and ergos imo. The smaller size is very appealing too… I think I’m gonna keep the X-T3 for the riskier scenarios where I don’t have to ‘care’ for it, and get an X-H2 for birding and macro. That autofocus looks pretty good. In the end, if I didn’t photograph scenes with high burst rates, I would go for the X-T5, no brainer.

1 Like

This is not easy …

1 Like

Looks like the EXIF-based black level PR is now in rawspeed/stable and is referred to by darktable, so it will be in the 4.2 release.

1 Like