afre's G'MIC diary

In Action

afre_cleanphoto graduated from the misc section. :partying_face:

ATM, it is only available in CLI. Remember to gmic update. (I will leave the rush commands afre_cleanphoto0 and afre_cleanphoto1 in afre.gmic for now, even though this replaces them.)

afre_cleanphoto:
    2<=size<=10,1<=_recovery<=100,-50<=_xy_sections<=50,_mask1>=1,_mask2>=1,...

  Clean dust and scratches from photos.
  Default values: 'size=3', 'recovery=10', 'xy_sections=1' and 'mask1=1'.

  'xy_sections' has special properties.
  - 'xy_sections>0': process masked regions specified by 'mask1','mask2',...
  - 'xy_sections<0': assign and display numbered regions.

In this segment, I will show you what it does. There is nothing special about afre_cleanphoto per se; it is a matter of thinking through the problem. Take the following image, which has dust fragments that we want to remove or reduce:

Since the fill algorithm is simple and unintelligent, we don’t want it to alter more of the image than is necessary. The plan is to separate the image into regions, the simplest arrangement being a grid small enough to grab areas of similar features. For this image, that would be roughly a 30x30 mapping.
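The grid idea can be sketched like this (my own Python illustration; the function name and grid arithmetic are assumptions, not afre's code):

```python
# Sketch (not afre's actual code): number the cells of an n-by-n grid
# covering a w-by-h image, the way a negative 'xy_sections' value
# subdivides the image into numbered subregions.

def section_of(x, y, w, h, n):
    """Return the 1-based section number of pixel (x, y) in an n-by-n grid."""
    col = min(x * n // w, n - 1)   # clamp so the last row/column absorbs remainders
    row = min(y * n // h, n - 1)
    return row * n + col + 1
```

With n=30, pixel coordinates map onto 900 numbered sections, matching the -30 example below.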

afre_section helps us with this and is built into afre_cleanphoto. You invoke it by giving the third parameter ‘xy_sections’ a negative number; in our case, I have chosen -30 to subdivide the image into 900 subregions. Again, this is a minimalist method, but that is our objective. It may introduce tiling artifacts, and I have applied weights to mitigate that somewhat.

gmic 502.jpeg afre_cleanphoto 6,10,-30 output 502-section.png

If xy_sections=1, the command will filter the entire image as one big section. Besides unnecessarily altering unblemished areas, this kind of processing is rather slow. Therefore, I would rather take the hit of introducing tiling artifacts that the weights won’t completely ameliorate.

Now, we are tasked with choosing which areas to modify. I have selected about 80% of the blemishes.

gmic 502.jpeg afre_cleanphoto 6,10,30,152,182,725,755,296,297,784,814,816,845,846,871,775,798,330,360,536,224,220,655,715,240,270,852,706,818,453 output 502-cleanphoto.png

Now for the result. You may zoom and toggle among the images in this post to see the difference. Better than before!

Note that it would probably have been easier on the command if we had done some pre-processing first. This input image has lots of things needing correction: lens distortion, vignetting, chromatic aberration, scanning artifacts, nonlinear gamma, and file type and compression artifacts, among others. There is only so much a single command can do in light of these.


I rewrote afre_cleanphoto to be simpler yet faster and better (I hope), though the testing is next to nil. (Only on 502.jpeg; living dangerously. :stuck_out_tongue:)

Highlights

1 One less parameter, which I like.
2 Connected regions are processed as one region. No tile splitting: faster, in-place processing.
3 Inpainting uses inpaint (or inpaint_pde) instead of morphological open. It may have the downsides I have shown in another thread on inpainting, but when it works, the result is less noticeable than open's.
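To show what "connected regions processed as one" could mean, here is a small Python sketch (my illustration, not afre_cleanphoto's internals) that merges adjacent selected grid cells into connected groups with union-find, so each group can be handled in one pass:

```python
# Sketch (my illustration, not afre_cleanphoto's internals): merge
# adjacent selected grid cells into 4-connected groups with union-find.

def connected_groups(cells, n):
    """cells: set of 1-based section numbers in an n-by-n grid.
    Returns a list of sets, one per 4-connected group."""
    parent = {c: c for c in cells}

    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]  # path halving
            c = parent[c]
        return c

    def union(a, b):
        parent[find(a)] = find(b)

    for c in cells:
        row, col = divmod(c - 1, n)
        for dr, dc in ((0, 1), (1, 0)):   # right and down neighbours suffice
            r2, c2 = row + dr, col + dc
            if r2 < n and c2 < n and r2 * n + c2 + 1 in cells:
                union(c, r2 * n + c2 + 1)

    groups = {}
    for c in cells:
        groups.setdefault(find(c), set()).add(c)
    return list(groups.values())
```

Processing each group once, rather than each cell, is what avoids the tile splitting mentioned above.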

Discussion

2 and 3 are in fact two separate problems and could be two independent commands. 2 could be a generalized command for user-specified local processing, saving time and leaving parts of the image unprocessed. Next steps would be to make it smarter and blend better. 3 has room to grow too: it isn’t exactly seamless, doesn’t fill content evenly, and removes texture without adding any back. Of course, none of this is easy to do.

Only when there is more than one item, which there isn’t. Good catch.

I decided to do it that way because it makes more sense.

The first question is addressed, and the second?

Update

New command and filter afre_portraitmontage. I may add features and conditional checks later to make it more robust. Currently, if you follow the guidelines in the descriptions, you should be fine. Let me know if you have any issues or suggestions. Testing by myself is boring and ineffective.

As usual, wait an hour or so before running gmic update and updating your plugin filters.

afre_portraitmontage:
    0<=spacing<=10,0<=_colour={[R,G,B]}<=255

  Generate portrait montage.
  Default values: 'spacing=5' and 'colour=230,255,230'.

  Portraits should be centred and have the same dimensions.


Update

afre_cleanphoto is much harder to code than I had bargained for. I hope the time and effort are worth it. After a couple of weeks of hair loss, I have finally tamed my local processing strategy. The command should be able to parse any regional mask list without fail and suppress any bad input.

Now that I have gotten that out of the way, inpainting is next. Currently, two methods are available to test. I see inpainting as a 3-step process: detecting the blemishes, masking them accurately and filling in the gaps convincingly. Each step is difficult in its own right and the subject of state-of-the-art research (by people who are smarter than I am and do it for a living). Add in local processing: it is quite the mythical dragon!
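The three steps can be illustrated on a 1D signal (a toy sketch of the idea, not afre_cleanphoto's algorithm; the function names and threshold are assumptions): detect outliers against a local median, mask them, fill from the neighbourhood.

```python
# Sketch of the three steps on a 1D signal (an illustration of the idea,
# not afre_cleanphoto's algorithm): detect outliers against a median
# filter, mask them, and fill each masked sample from its neighbourhood.

import statistics

def clean(signal, radius=2, threshold=50):
    n = len(signal)
    # 1. Detect: compare each sample with its local median.
    med = [statistics.median(signal[max(0, i - radius):i + radius + 1])
           for i in range(n)]
    # 2. Mask: flag samples that deviate too much.
    mask = [abs(s - m) > threshold for s, m in zip(signal, med)]
    # 3. Fill: replace flagged samples with their local median.
    return [m if flagged else s for s, m, flagged in zip(signal, med, mask)]
```

A dust speck shows up as an isolated outlier and gets replaced; smooth gradients pass through untouched.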

1 afre_cleanphoto uses the method I originally committed and shared. It seems to detect and mask consistently, at least for test image 502.jpeg, but is rather plain when it comes to the inpainting.

2 afre_cleanphoto1 is weaker in detection and masking but better in inpainting. It is not the most convincing of the set but is available because I happened to use it to develop the local processing part of the code.


Small update

1 Books are more substantive than papers. The public library happened to provide access to a book I wanted to read, on median filters and robust stats, which enabled this fun adventure.

2 Noticed that afre_reorder doesn’t work anymore: to fix.


Major Updates (edit improved post, added new points)

As usual, remember to update your G’MIC app and plugin, and update the commands and filters for each. Feedback is appreciated and let me know how you are using these commands. Enjoy!
 
1 Improved afre_darksky CLI GUI (formerly fx_darken_sky). Supported by the new afre_contrast (see 2). Future work would be to add a local contrast option. Dramatic darkening example:

Image and original deep sky method from Deep blue sky effect.

Settings

image

afre_darksky:
    blend={ 0=softlight | 1=overlay },-10<=_contrast<=10,_smooth_method={ \
     0=fast_approx | 1=slow_accurate },0<=_smooth_radius<=3,_channels={ 0=RGB | \
     1=CIELAB_L }

  Enhance landscape by darkening the sky.
  Default values: 'blend=0', 'contrast=0', 'smooth_method=0', 'smooth_radius=0' and 'channels=1'.

 
Original

Dramatic (I have updated the filter since, so it is a little different from this.)


 
2 NEW Added afre_contrast CLI GUI. Drives the contrast parameter of afre_darksky, which is a Δ% in contrast (or, more precisely, in the standard deviation). It uses a simple curve; by itself, it doesn’t look very good when pushed. Hint: the new afre_localcontrast is a better standalone contrast modifier.

image

afre_contrast:
    -10<=sd_change%<=10,_method={ 0=fast_approx | 1=slow_accurate }

  Enhance contrast with standard deviation.
  Default values: 'sd_change%=0' and 'method=0'.
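The mechanism implied by 'sd_change%' can be sketched as rescaling around the mean (my assumption about the principle, not afre_contrast's code):

```python
# Sketch (an assumption about the mechanism, not afre_contrast's code):
# change contrast by rescaling values about their mean so that the
# standard deviation moves by a given percentage.

import statistics

def sd_contrast(values, sd_change_pct):
    """Scale values about their mean so the std changes by sd_change_pct %."""
    mean = statistics.fmean(values)
    scale = 1 + sd_change_pct / 100
    return [mean + (v - mean) * scale for v in values]
```

Scaling deviations about the mean changes the standard deviation by exactly that factor while leaving the mean (overall brightness) unchanged.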

 
3 NEW Added search_dichotomic CLI. This is the command version of the search_dichotomic() macro; I am able to do more with this one. Currently, it helps afre_contrast determine the right contrast level for a given parameter setting.

search_dichotomic:
    "increasing_fn",target_y,_precision>0

  Find parameter for function such that 'target_y' is met in image.
  Default value: 'precision=1e-3'.

  - Return 'nan' if search fails.
  * Credit: David Tschumperle.
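The dichotomic (bisection) idea behind search_dichotomic looks roughly like this in Python (my rendering of the concept, not the G'MIC implementation; the bracket arguments lo and hi are assumptions):

```python
# Sketch of the dichotomic (bisection) search idea: given an increasing
# function fn on [lo, hi], find x whose image is close to target_y.
# (My Python rendering, not the G'MIC implementation.)

def search_dichotomic(fn, target_y, lo, hi, precision=1e-3):
    """Bisection on an increasing fn over [lo, hi]; returns x, or nan on failure."""
    if not (fn(lo) <= target_y <= fn(hi)):
        return float("nan")          # the command returns 'nan' if the search fails
    while hi - lo > precision:
        mid = (lo + hi) / 2
        if fn(mid) < target_y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Because the function is increasing, each halving of the interval keeps the target bracketed, so the search converges in log2((hi-lo)/precision) steps.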

 
4 NEW Added afre_localcontrast GUI CLI.

image

afre_localcontrast:
    1<=radius<=10,-100<=_amount<=100

  Enhance local contrast.
  Default values: 'radius=1' and 'amount=50'.

Hint: Keep radius at 1 or as low as possible; otherwise, edges will bleed over.
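A minimal way to picture what radius and amount do (my reading of the parameters on a 1D signal, not afre_localcontrast's code):

```python
# Sketch (my reading of the parameters, not afre_localcontrast's code):
# boost local contrast by pushing each sample away from its local mean
# over a window of the given radius.

def local_contrast(values, radius=1, amount=50):
    out = []
    for i in range(len(values)):
        window = values[max(0, i - radius):i + radius + 1]
        mean = sum(window) / len(window)
        out.append(mean + (values[i] - mean) * (1 + amount / 100))
    return out
```

A larger radius means deviations from the local mean extend further from each edge, which is exactly why larger radii produce more pronounced halos.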
 

5 NEW afre_jabz afre_ijabz afre_jchz afre_ijchz  Do you know what Jzazbz and its polar cousin JzCzhz are (besides being a mouthful to read)? In a nutshell, they are newer than, and outshine, L*a*b* and friends. I have shortened the names: e.g. afre_jabz represents afre_rgb2jzazbz, and afre_ijabz is its inverse transformation.
 

That is all for now. This brings us to a total of 35 commands and 15 filters. Not bad for a person who started with no coding background or mad math skills.

# List of Commands and Filters
#---------35-----------15-----
# GUI CLI : afre_contrast afre_localcontrast afre_darksky afre_edge
#   afre_cleantext afre_vigrect afre_vigcirc afre_softlight
#   afre_sharpenfft afre_contrastfft afre_gleam afre_halfhalf
#   afre_portraitmontage
# CLI only: afre_jabz afre_ijabz afre_jchz afre_ijchz afre_gnorm
#   afre_hnorm afre_sdpatch afre_reorder afre_section afre_cleanphoto
#   afre_cleanphoto1 afre_compare afre_log2 afre_y50 afre_orien
#   afre_conv afre_box afre_gui0 afre_gui1 afre_gui0c afre_gui1c
#   search_dichotomic
# GUI old : fx_gamify fx_hnorm

I think you’re making commands and filters at a faster rate than I did. I have about 50 in a year. But then again, I slowed down as I decided to take breaks more often and work on Krita more frequently.

Reflection

Stats (totals, commits and SLOC), while fun, under-represent the work I put into afre.gmic. Work is an interesting term for a hobby, but that is what it usually is; I am not a natural, after all. Generalizing a function (command or filter) for CLI, GUI, basic or advanced use cases takes an inordinate amount of time to implement and balance. There is also the constant pursuit of robustness and speed: asking the right questions and finding better ways to tackle the problem through eureka moments and dependable trial and error.

Updates and thoughts

afre_contrast afre_localcontrast afre_darksky have received another round of refinement. I will highlight the ones that will apply to all my commands and filters one way or another.

1 These will be the first GUI plugin filters to include a note drawing attention to the inaccuracy of the plugin's preview. Unfortunately, no matter how much I compensate, this will always be an issue, and users will always be puzzled by it. This problem affects the attractiveness of a filter too: if the preview contains artifacts or doesn't show the results properly, the filter is deemed useless, or worse, the author incompetent. Anyway, here is an example of the note:

image

2 Filtering can yield unnatural results. Every technique has its signature deficiencies. One way I have been tackling this is by blending them out. The trouble is that it isn’t a catch-all. The same applies to people who publish research papers: they may dress up their results, but the algorithms often aren’t as awesome as they are characterized to be.

Hopefully, the weights used to blend afre_localcontrast are robust enough. Actually, they weren’t good enough, so I added another set. This one is meant to soften the clipping and so should be transferable to any command or filter that clips.

3 afre_darksky lost its last parameter, which allowed a choice between acting on lightness only or on all channels (the latter leading to a more colourful result). I decided to simplify by removing the option. A darkened sky would inherently be less colourful, and fun hue changes interfere with accuracy and predictability. This approach will be applied to future adjustments in afre.gmic.


Updates from the germ factory

The contrast triumvirate has been buggy and misbehaving. I have been scrubbing and polishing since I first rewrote afre_darksky. Sorry about all of these changes (if you have been following).

1 afre_contrast
a Added clip attenuation weights as I did for afre_localcontrast, which will allow the contrast enhancement to retain more detail in the darkest and brightest areas.

b Removed search_dichotomic because it slows down the code and makes it unpleasant to debug. It is awesome but definitely goes against my desire to create and share minimalist yet robust commands.

c Settled on adjusting contrast based on the luminance component via afre_y50 and afre_orien. This should give the contrast enhancement a natural appearance; at least compared to RGB and L* D50.

2 afre_localcontrast
a Fixed multiple bugs; in particular, the previous version failed to attenuate the brightest isolated areas due to NaNs and clipping during blending.

b To dos
i Allow the user to specify higher radii, though higher radii would mean more pronounced halos.
ii Halos happen along strong edges because of the neighbourhood filtering. I would have to attenuate those places with yet another set of weight maps and blending.

3 afre_darksky
Changes to afre_contrast will affect this command.


As usual, update your G’MICs and their filters. I hope you find a gem that helps you make the most of your images. COVID-19 and life are hard. Stay strong and delight in your blessings.


I like having the fun options there in filters (not that I use this one), but for the sake of predictability the less colourful one should be the default.

New commands

For the sake of brevity, I will start shortening “commands and filters” to “commands” because “filters” in my mind refers to what we would call “processing” in this forum.

Here are two “new” commands that actually aren’t: from my private collection now ready to share.

1 afre_details CLI GUI is similar to split_details but uses my custom guided filter afre_gui0. On my computer, it takes 2x+ the processing time, but it is much more useful.
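For readers unfamiliar with guided filters, here is a minimal 1D self-guided sketch of the general idea (not afre_gui0's implementation; the eps value and helper names are my own): each window fits a linear model q = a*I + b, which smooths flat areas while preserving edges.

```python
# A minimal 1D self-guided filter sketch (the general idea behind a
# guided filter; not afre's implementation). a ~ 1 at strong edges
# (edge preserved), a ~ 0 in flat regions (local mean returned).

def box(xs, r):
    """Sliding box (mean) filter of radius r with clamped borders."""
    return [sum(xs[max(0, i - r):i + r + 1]) / len(xs[max(0, i - r):i + r + 1])
            for i in range(len(xs))]

def guided_filter(I, r=2, eps=1e-2):
    mean_I = box(I, r)
    mean_II = box([v * v for v in I], r)
    var_I = [m2 - m * m for m2, m in zip(mean_II, mean_I)]
    a = [v / (v + eps) for v in var_I]               # per-window slope
    b = [m * (1 - ai) for m, ai in zip(mean_I, a)]   # per-window offset
    mean_a, mean_b = box(a, r), box(b, r)
    return [ma * v + mb for ma, mb, v in zip(mean_a, mean_b, I)]
```

Subtracting the filtered result from the input yields the detail scale that a split_details-style command works on.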

image
 

2 afre_texture CLI GUI depends on a detail or frequency scale generated by afre_details. Currently, it allows the attenuation or enhancement of one scale only.

I might enable it to act on multiple scales, since every app and their mom or pop seems to feature that. Likely not, though: I see it as an unnecessary complication, and one that invites over-processing.

You’re actually inspiring me to implement guided filters everywhere in Krita.

They’re Done!

Rewriting afre_darksky led me to write afre_contrast and afre_localcontrast. I made so many changes along the way that they are nothing like their first incarnations. Now, I think they behave the way I want them to, though my opinion is still in flux. Users should be able to use them with some confidence.

Feel free to tweak the parameters and try as many combinations as possible for as many images as possible. Let me know if there are any bugs, gotchas or anything you would want to add or change.

In the plugin, they can be found under the “Colors” category. You can search for them: their names are Dark Sky, Contrast and Local Contrast.

Room for improvement

As noted, afre_localcontrast can produce haloing around edges, by the nature of local filtering. The size of the halo is the same as that of the neighbourhood. Currently, the user can mitigate that by simply setting the radius to 1. In fact, I do most of my own processing that way.

The next step of development would be to attenuate the contrast enhancement in the neighbourhood of the edges, or use a method that doesn’t produce halos in the first place.

Other to-dos and remarks

Next, I will be looking into finalizing afre_texture, testing its parameters’ range and feel. The feel is very important to me: how each value modifies the image as you move the sliders in the plugin, and what the changes look like. It has to be organic. Moreover, I have a tendency to be minimalist and conservative; I need to allow the user to push it a bit at max settings.

An afre_chroma may be coming soon from my private collection. Based on afre_jchz, it will give the user an opportunity to tidy up the colour at the end of the processing workflow. afre_chroma would depend on an afre_brightness command.
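A chroma adjustment of this kind would rely on the polar relationship between Jzazbz and JzCzhz, which is just a rectangular-to-polar change on the az, bz plane (my illustration; the function names are assumptions):

```python
# Sketch of the polar relationship between Jzazbz and JzCzhz
# (my illustration): chroma Cz and hue hz come from the az, bz plane.

import math

def jab_to_jch(jz, az, bz):
    """Rectangular (Jz, az, bz) -> polar (Jz, Cz, hz)."""
    return jz, math.hypot(az, bz), math.atan2(bz, az)

def jch_to_jab(jz, cz, hz):
    """Polar (Jz, Cz, hz) -> rectangular (Jz, az, bz)."""
    return jz, cz * math.cos(hz), cz * math.sin(hz)
```

Scaling Cz while holding Jz and hz fixed changes saturation without shifting hue or lightness, which is what makes the polar form attractive for colour tidy-ups.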

I am at the stage of command development where a new command, or a change to one, inevitably involves many other commands. Well, this has been so for a long time; I just haven’t written about it. It can be tedious, but I think I am enjoying the scenic route. I feel it is more motivating than fixing one bug after another, especially when a command is broken, which is bad for the user and my reputation. But then there are these exciting tasks… the woes of a hobby. :nerd_face:


There is a problem with my implementation of Jzazbz (afre_jabz). Compared to what @snibgo gets from his test image toes.png (http://im.snibgo.com/toes.png), the image stats I get are quite different.

gmic toes.png / 257 srgb2rgb afre_jabz s c

Jz -> min =  0.000652   max = 0.0158496  mean = 0.00671463  std = 0.0028268
az -> min = -0.00395627 max = 0.00586999 mean = 0.000780447 std = 0.00162143
bz -> min = -0.00443439 max = 0.00734315 mean = 0.00161774  std = 0.00137324

The only change I made was to the epsilon factor, from e-11 to e-10, because e-11 is too small to fend off NaNs. The discrepancy could also stem from math and precision differences among gmic, im and octave, or my code could simply be faulty.

Edit: One thing I just remembered is that calculations outside of gmic's math interpreter have less precision. It could be as simple as not using mix_channels, but that would make the command slower.

Edit: I forgot to mention that I hard-coded PQ=10000, which is consistent with the reference code.

If you (dev or otherwise) have time, please take a look at the commands. In the meantime, I guess it would be best for me to check against the reference m-code step-by-step (https://github.com/quag/JzAzBz/blob/master/matlab/JzAzBz.m).

I’ve been messaging with afre, but I’ll transfer to this public thread. Many heads are better than two.

I wrote the code in ImageMagick for the Jzazbz colorspace. But I could not directly compare results with another implementation, so I’m glad I now have a chance.

afre_jabz and IM give small differences in Jz, but larger differences (up to 50%) in az and bz.

toes.png is 16 bits/channel (i.e. maximum value 65535), so afre correctly divides by 257 to get G’MIC’s usual maximum of 255. Reviewing afre_jabz (which also calls rgb2xyz), the only difference I can see with the IM code is afre’s “epsilon”, which he has explained, and that should only affect Jz.

@afre: Have you implemented the opposite command in GMIC, so you can test a round-trip?

I intend to copy afre_jabz into my user.gmic, so I can do more careful comparisons. But it is significantly past my bedtime here in the UK. G’night all.

It can be found here:

Ping @garagecoder because of his previous help.
Ping @Carmelo_DrRaw because he has an implementation in PhotoFlow and might want to chime in.

@snibgo, could you share a link to your code?
(I’m not capable of understanding all the intricacies of the Matlab code).

@afre @snibgo @garagecoder I am very much interested in this cross-check! I plan to replace Lab with Jzazbz everywhere in the PhotoFlow code, particularly where reliable Hue and Chroma values are needed.

As a first step, I am adding the Jzazbz values to the color sampler, so that one can easily compare RGB, Lab and Jzazbz with test images. I will let you know as soon as the code is committed and packages are available for testing.

http://im.snibgo.com/jzazbz.htm

The explanation for the extra 0.5s on az and bz is

@afre @snibgo Looking at Alan’s code, it seems to be equivalent to what I have in PhotoFlow, apart from the 0.5s added to az and bz.

I would propose comparing a few simple numerical examples before considering a more complex image. Here is what I obtain starting from a pure sRGB red (using Alan’s terminology for the intermediate variables and a peak luminance of 10000):

RGB: 255, 0, 0
XYZ:    41.2417, 21.2657, 1.9312
XpYpZp: 47.1383, 28.0575, 1.9312

LMS:    35.8541, 22.0463, 7.93806
LpMpSp: 0.218877, 0.180661, 0.116117

Izazbz: 0.199769, 0.0996431, 0.0912486
Jzazbz: 0.0989701, 0.0996431, 0.0912486

Had a quick look also, and one question for @afre.
@snibgo’s function ConvertXYZToJzazbz() seems to be quite straightforward to convert into a single G’MIC fill command, so why didn’t you choose this path?
Possible advantages:

  1. The G’MIC math parser does all its computation using double-precision values (64 bits), whereas a classical G’MIC pipeline has to store manipulated values in float-valued images (32 bits), so using a single call to fill would allow more precision in the calculation.

  2. The color conversion is obviously done pixel by pixel, which means doing it in a single fill would allow easy parallelization of the calculation.

  3. Not checked entirely, but it seems to me that (almost) a simple copy/paste of @snibgo’s function could be enough, as the G’MIC math parser is inspired by the C language.

Maybe something like:

#@cli rgb2jzazbz : illuminant={ 0=D50 | 1=D65 } : (no arg)
#@cli : Convert color representation of selected images from RGB to Jzazbz.
#@cli : Default value: 'illuminant=1'.
rgb2jzazbz : skip "${1=,}"
  l[] if isnum("$1") illu={"$1?1:0"} else if ["'$1'"]!=',' noarg fi illu=1 fi onfail noarg illu=1 endl
  e[^-1] "Convert color representation of image$? from RGB to Jzazbz, using the D"{arg(1+$illu,50,65)}" illuminant."
  rgb2xyz $illu xyz2jzazbz

#@cli xyz2jzazbz
#@cli : Convert color representation of selected images from XYZ to Jzazbz.
xyz2jzazbz :
  e[^-1] "Convert color representation of image$? from XYZ to Jzazbz."
  f ${-_jzazbz_const}"
    Xp = Jzazbz_b*i0 - (Jzazbz_b - 1)*i2;
    Yp = Jzazbz_g*i1 - (Jzazbz_g - 1)*i0;
    Zp = i2;
    L = 0.41478972*Xp + 0.579999*Yp + 0.0146480*Zp;
    M = -0.2015100*Xp + 1.120649*Yp + 0.0531008*Zp;
    S = -0.0166008*Xp + 0.264800*Yp + 0.6684799*Zp;
    tmp = (L/peakLum)^Jzazbz_n;
    Lp = ((Jzazbz_c1 + Jzazbz_c2*tmp)/(1 + Jzazbz_c3*tmp))^Jzazbz_p;
    tmp = (M/peakLum)^Jzazbz_n;
    Mp = ((Jzazbz_c1 + Jzazbz_c2*tmp)/(1 + Jzazbz_c3*tmp))^Jzazbz_p;
    tmp = (S/peakLum)^Jzazbz_n;
    Sp = ((Jzazbz_c1 + Jzazbz_c2*tmp)/(1 + Jzazbz_c3*tmp))^Jzazbz_p;
    Iz  = 0.5*Lp + 0.5*Mp;
    az = 3.52400*Lp - 4.066708*Mp + 0.542708*Sp;
    bz = 0.199076*Lp + 1.096799*Mp - 1.295875*Sp;
    Jz = (1 + Jzazbz_d)*Iz/(1 + Jzazbz_d*Iz) - Jzazbz_d0;
    [ Jz,az,bz ]"

#@cli jzazbz2rgb : illuminant={ 0=D50 | 1=D65 } : (no arg)
#@cli : Convert color representation of selected images from Jzazbz to RGB.
#@cli : Default value: 'illuminant=1'.
jzazbz2rgb : skip "${1=,}"
  l[] if isnum("$1") illu={"$1?1:0"} else if ["'$1'"]!=',' noarg fi illu=1 fi onfail noarg illu=1 endl
  e[^-1] "Convert color representation of image$? from Jzazbz to RGB, using the D"{arg(1+$illu,50,65)}" illuminant."
  jzazbz2xyz xyz2rgb $illu

#@cli jzazbz2xyz
#@cli : Convert color representation of selected images from Jzazbz to XYZ.
jzazbz2xyz :
  e[^-1] "Convert color representation of image$? from Jzazbz to XYZ."
  f ${-_jzazbz_const}"
    tmp = i0 + Jzazbz_d0;
    Iz = tmp/(1 + Jzazbz_d - Jzazbz_d*tmp);
    azz = i1;
    bzz = i2;
    Lp = Iz + 0.138605043271539*azz + 0.0580473161561189*bzz;
    Mp = Iz - 0.138605043271539*azz - 0.0580473161561189*bzz;
    Sp = Iz - 0.0960192420263189*azz - 0.811891896056039*bzz;
    tmp = Lp^(1/Jzazbz_p);
    L = peakLum*((Jzazbz_c1 - tmp)/(Jzazbz_c3*tmp-Jzazbz_c2))^(1/Jzazbz_n);
    tmp = Mp^(1/Jzazbz_p);
    M = peakLum*((Jzazbz_c1 - tmp)/(Jzazbz_c3*tmp-Jzazbz_c2))^(1/Jzazbz_n);
    tmp = Sp^(1/Jzazbz_p);
    S = peakLum*((Jzazbz_c1 - tmp)/(Jzazbz_c3*tmp-Jzazbz_c2))^(1/Jzazbz_n);
    Xp = 1.92422643578761*L - 1.00479231259537*M + 0.037651404030618*S;
    Yp = 0.350316762094999*L + 0.726481193931655*M - 0.065384422948085*S;
    Zp = -0.0909828109828476*L - 0.312728290523074*M + 1.52276656130526*S;
    X = (Xp + (Jzazbz_b - 1)*Zp)/Jzazbz_b;
    Y = (Yp + (Jzazbz_g - 1)*X)/Jzazbz_g;
    Z = Zp;
    [ X,Y,Z ]"

_jzazbz_const :
  u "const Jzazbz_b = 1.15;
     const Jzazbz_g = 0.66;
     const Jzazbz_c1 = 3424/4096;
     const Jzazbz_c2 = 2413/128;
     const Jzazbz_c3 = 2392/128;
     const Jzazbz_n = 2610/16384;
     const Jzazbz_p = 1.7*2523/32;
     const Jzazbz_d = -0.56;
     const Jzazbz_d0 = 1.6295499532821566e-11;
     const peakLum = 10000;"

Not tested thoroughly, but at least the transform seems to be correctly reversible, with

$ gmic sp lena rgb2jzazbz jzazbz2rgb
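As a cross-check of the proposal above, the same transform can be transcribed into Python with the identical constants and matrices, and verified against Carmelo's pure-red numbers and for round-trip reversibility (a sanity test of the math only, not part of the G'MIC code):

```python
# Numeric cross-check of the XYZ <-> Jzazbz transform above, transcribed
# into Python with the same constants (a sanity test, not G'MIC code).

b, g = 1.15, 0.66
c1, c2, c3 = 3424 / 4096, 2413 / 128, 2392 / 128
n, p = 2610 / 16384, 1.7 * 2523 / 32
d, d0 = -0.56, 1.6295499532821566e-11
peak = 10000

def pq(v):
    """PQ-style nonlinearity used on L, M, S."""
    t = (v / peak) ** n
    return ((c1 + c2 * t) / (1 + c3 * t)) ** p

def pq_inv(vp):
    """Analytic inverse of pq()."""
    t = vp ** (1 / p)
    return peak * ((c1 - t) / (c3 * t - c2)) ** (1 / n)

def xyz2jzazbz(X, Y, Z):
    Xp, Yp, Zp = b * X - (b - 1) * Z, g * Y - (g - 1) * X, Z
    L = 0.41478972 * Xp + 0.579999 * Yp + 0.0146480 * Zp
    M = -0.2015100 * Xp + 1.120649 * Yp + 0.0531008 * Zp
    S = -0.0166008 * Xp + 0.264800 * Yp + 0.6684799 * Zp
    Lp, Mp, Sp = pq(L), pq(M), pq(S)
    Iz = 0.5 * Lp + 0.5 * Mp
    az = 3.52400 * Lp - 4.066708 * Mp + 0.542708 * Sp
    bz = 0.199076 * Lp + 1.096799 * Mp - 1.295875 * Sp
    return (1 + d) * Iz / (1 + d * Iz) - d0, az, bz

def jzazbz2xyz(Jz, az, bz):
    Iz = (Jz + d0) / (1 + d - d * (Jz + d0))
    Lp = Iz + 0.138605043271539 * az + 0.0580473161561189 * bz
    Mp = Iz - 0.138605043271539 * az - 0.0580473161561189 * bz
    Sp = Iz - 0.0960192420263189 * az - 0.811891896056039 * bz
    L, M, S = pq_inv(Lp), pq_inv(Mp), pq_inv(Sp)
    Xp = 1.92422643578761 * L - 1.00479231259537 * M + 0.037651404030618 * S
    Yp = 0.350316762094999 * L + 0.726481193931655 * M - 0.065384422948085 * S
    Zp = -0.0909828109828476 * L - 0.312728290523074 * M + 1.52276656130526 * S
    X = (Xp + (b - 1) * Zp) / b
    return X, (Yp + (g - 1) * X) / g, Zp
```

Feeding in the XYZ of pure sRGB red (41.2417, 21.2657, 1.9312) reproduces Carmelo's Jzazbz triple to within rounding, and the inverse recovers the input.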