Just a few general remarks about resources, from someone who reads image processing academic papers every week…
Image processing is an area that interests several professions: mathematicians, electrical engineers, computer scientists, colour scientists (psychophysicists?), and artists. Most image processing research is aimed at medical imaging (MRI, X-rays), telescope and microscope imagery (biology and astronomy), satellite imagery (GPS/cartography), or smartphone-oriented, fully automated photography (auto colour, auto exposure, auto tone mapping, optimised contrast, etc.). Only a small subset is aimed at actual artistic photography.
As such, many “groundbreaking” algorithms (denoising, deblurring, etc.) are aimed at monochrome images and perform quite badly in RGB (creating chromatic aberrations), because they don’t care about consistency between channels. They are fine, however, for X-rays or radio telescopes.
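A toy illustration of the problem (my own sketch, not from any specific paper): run a simple nonlinear denoiser independently on each channel of a perfectly neutral gray edge. Because each channel makes its own decisions on its own noise, the denoised channels disagree, and chroma appears where the scene had none.

```python
import numpy as np

rng = np.random.default_rng(0)

# A neutral gray step edge: R = G = B everywhere, so chroma is exactly zero.
edge = np.concatenate([np.full(32, 0.2), np.full(32, 0.8)])
img = np.stack([edge] * 3)                      # rows = R, G, B
noisy = img + rng.normal(0, 0.08, img.shape)    # independent noise per channel

def median_denoise(x, w=5):
    """Naive 1-D median filter on a single channel."""
    pad = w // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([np.median(xp[i:i + w]) for i in range(len(x))])

# Denoise each channel on its own: no cross-channel consistency at all.
den = np.stack([median_denoise(c) for c in noisy])

# Chroma = spread between channels at each pixel; should be ~0 for gray.
chroma = den.max(axis=0) - den.min(axis=0)
print(chroma.max())   # clearly non-zero: colour fringes on a neutral edge
```

The fix most photographic pipelines use is to denoise in a luminance/chrominance decomposition, or to regularise the channels jointly, instead of treating R, G and B as three unrelated monochrome images.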
Also, mathematicians and electrical engineers tend to be clueless about psychophysics. Colour scientists tend to be sloppy on maths. Computer scientists tend to lack some physics and treat images as random numbers with no relationship to reality. And all of them usually lack a chemistry background and treat photography as if it had been born with computers, disregarding 160 years of film legacy (except the guys at Kodak and Fuji).
As a consequence, you see people treating RGB data as “colour”, or sloppily applying colour models without checking their conditions of validity. The “funniest” example: people testing the robustness of their demosaicing algorithms in YUV, on white-balance-corrected sRGB files artificially mosaiced from film scans. To get to YUV you need to go through XYZ first, but XYZ needs full RGB vectors, meaning it needs the already-demosaiced picture as input; so they are evaluating demosaicing in a colour space that is only valid for already-demosaiced data.
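The circularity is easy to show with the matrices themselves. Below is a minimal sketch (the sRGB-to-XYZ matrix is the standard linear-sRGB/D65 one from IEC 61966-2-1; the NaN-mosaic pixel is my own illustration): the conversion is a matrix product over a full RGB triplet, while a Bayer sensor site only measures one of the three channels.

```python
import numpy as np

# Linear sRGB (D65) -> XYZ matrix, per IEC 61966-2-1.
# Each XYZ component mixes ALL THREE of R, G and B.
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

# A demosaiced pixel: all three channels are known, XYZ is well defined.
rgb = np.array([0.5, 0.3, 0.2])
xyz = M @ rgb

# A raw Bayer pixel at a "green" site: R and B were never measured.
# Marking the missing samples as NaN makes the point explicit.
bayer_pixel = np.array([np.nan, 0.3, np.nan])
broken = M @ bayer_pixel
print(xyz)      # finite: valid input
print(broken)   # all NaN: XYZ (and hence YUV) is undefined pre-demosaicing
```

In other words, any metric computed “in YUV” has already assumed the demosaicing step it is supposed to evaluate.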
Coming from mechanical engineering, where we can actually kill people if we cut corners on the theory and calculations, all that sloppiness and lack of due process in image processing is just killing me.
I really wish research teams were more multidisciplinary and embedded real-life photographers and painters, because the results that pass for “the best” in experimental sections are often surprising (PSNR and RMSE don’t make an image, guys). It seems everyone is solving local problems seen from their narrow specialty, and nobody seems to care about photography as an ordered pipeline.
Back to the topic: when using academic resources, you need to check what the author’s specialty is and what kind of image processing he does (for cameras, microscopes, telescopes, X-rays, etc.?). A few topics are 100% signal processing (noise and blur), but most of them interleave physics, psychology, and ergonomics in non-trivial ways, and end up as some practically unusable Matlab code solving an ill-posed problem.