I’m working on large panoramas that often cover both very bright areas (like snow) and dark areas (shadows in valleys) within the same exposure. Haze removal is a great tool for recovering detail from the dark areas.
However, it seems that a bright patch in an otherwise dark image affects the whole image, making it much darker than it would have been if the bright patch were not there. Such images are difficult to handle later in the workflow.
Is there a way to handle this problem? Enabling “Show depth map” can help to discover which images will be a problem. It would be nice to have a mask where one could paint over the areas that should not count in the haze removal algorithm, or maybe a from/to slider defining the brightness range a pixel must fall within to be counted. Maybe there is a way to write a plugin/filter that would mask out the problem areas?
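For what it’s worth, RawTherapee’s haze removal is based on the dark channel prior, where a bright patch can dominate the atmospheric-light estimate. The sketch below is a simplified, hypothetical illustration of the masking idea (not RT’s actual code; the function name and parameters are my own, and the real algorithm also applies a local patch minimum that I skip here):

```python
import numpy as np

def estimate_atmosphere(img, mask=None, top_frac=0.001):
    """Estimate atmospheric light from the brightest dark-channel pixels,
    optionally ignoring masked pixels (e.g. a painted-over snow patch).
    Simplified: uses the per-pixel RGB minimum, no local min filter."""
    dark = img.min(axis=2)
    if mask is not None:
        # Excluded pixels can never be selected as "brightest"
        dark = np.where(mask, -np.inf, dark)
    n = max(1, int(dark.size * top_frac))
    idx = np.argpartition(dark.ravel(), -n)[-n:]
    ys, xs = np.unravel_index(idx, dark.shape)
    return img[ys, xs].mean(axis=0)
```

The brightness-range slider idea would be the same mechanism with the mask computed automatically, e.g. `mask = (luma < lo) | (luma > hi)` instead of a painted region.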
I’m not sure if the haze removal tool is really appropriate for a dynamic range compression operation…
Have you considered the DRC tool instead? Although in general, you need to be very careful when using operations like this prior to stitching. If you’re running operations like this on individual images, I’m kinda surprised that they aren’t completely screwing up stitching. The only times I’ve ever run dynamic range compression operations prior to stitching were on dual-fisheye images where the same algorithm is run on both exposures within a single image file.
Haze removal after stitching does not work well either; the image is far too big to apply haze removal in one run. Even after splitting it into 16 equal columns, RT crashes when I try to process one of them. It runs for a while, but after some time it aborts with this error:
RawTherapee, version 5.8, command line.
Output is 16-bit integer.
Processing: 331-segerstad-main_21576X107157_x16y1.big.tif
Merging sidecar procparams.
terminate called after throwing an instance of 'std::bad_array_new_length'
what(): std::bad_array_new_length
Each column is about 14 GB as uncompressed 16-bit TIFF; the total image size is about 290 GB.
Any ideas on how to apply haze removal to such an image?
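A back-of-the-envelope check on the crashing column (dimensions taken from the filename `331-segerstad-main_21576X107157_x16y1.big.tif`) suggests a plausible cause: `std::bad_array_new_length` is thrown when an array-new length is invalid, e.g. after a signed integer overflow, and this column’s pixel count exceeds what a signed 32-bit index can hold. This is speculation, not a confirmed diagnosis:

```python
# Column dimensions from the filename in the error output above
width, height = 21576, 107157
pixels = width * height

# A signed 32-bit pixel index tops out at 2**31 - 1 = 2_147_483_647
print(pixels)                # 2312019432, which is more than 2**31 - 1
print(pixels * 3 * 2 / 1e9)  # bytes for 16-bit RGB: ~13.9 GB, matching "about 14 GB"
```

If that is the cause, splitting into narrower columns (so each stays under ~2.1 gigapixels) might avoid the crash, independent of the consistency problem discussed below.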
Wow, that is multiple times the size of a Blu-ray Disc, where a dual-layer BD holds 50 GB. Well, in this case I would say “tough luck”. Sorry.
I fear a real solution is not possible then. Your original problem was that the haze removal algorithm delivers individual, incomparable results on the single images, which gives a poor result after stitching them together. The same will happen if you succeed in processing multiple columns.
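That said, if splitting is unavoidable, the per-tile inconsistency could in principle be sidestepped by estimating the atmospheric light once (e.g. on a downscaled copy of the whole panorama) and reusing that single value for every tile. A hypothetical numpy sketch, not RawTherapee’s implementation, with names and parameters of my own:

```python
import numpy as np

def dehaze_tile(tile, atmosphere, strength=0.95, t_min=0.1):
    """Dark-channel-prior style dehazing of one tile using a globally
    shared atmospheric light, so all tiles get comparable results.
    Simplified: per-pixel transmission, no local min filter or refinement."""
    # Transmission estimated from the dark channel of the normalized tile
    t = 1.0 - strength * (tile / atmosphere).min(axis=2)
    t = np.clip(t, t_min, 1.0)[..., None]
    # Invert the haze model I = J*t + A*(1 - t)
    return np.clip((tile - atmosphere) / t + atmosphere, 0.0, 1.0)
```

Because `atmosphere` and `strength` are fixed globally and this simplified transmission depends only on each pixel itself, identical pixel values in neighbouring tiles produce identical output, so column seams stay consistent. RawTherapee’s real implementation uses neighbourhood filtering, so tiles would additionally need to overlap.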