That’s because the script is meant to work with the other scripts in that repo:
fs_gradient_capture displays a fullscreen gradient and captures bracketed images every 1/3 EV over a fairly wide range - the idea is to point the camera you are calibrating at the screen to get calibration data. Manual bracketing in-camera may be sufficient, though I'm not sure how good the resulting response curve will be. It definitely helps to be capturing a smooth gradient. I may change this script to use just a white gradient instead of multiple colors, as I've seen reports that out-of-gamut inputs to the camera can throw off response recovery.
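Just to make the display side concrete, the ramp itself is nothing fancy; a minimal sketch with OpenCV (resolution and window handling here are placeholders, and the capture triggering is left out) would be something like:

```python
import cv2
import numpy as np

# Sketch of the display half: a neutral ramp shown fullscreen. The real
# script also cycles colors and triggers the bracketed captures.
WIDTH, HEIGHT = 1920, 1080                           # assumed display resolution

ramp = np.linspace(0, 255, WIDTH, dtype=np.uint8)    # dark to bright, left to right
gradient = cv2.merge([np.tile(ramp, (HEIGHT, 1))] * 3)

cv2.namedWindow("gradient", cv2.WINDOW_NORMAL)
cv2.setWindowProperty("gradient", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
cv2.imshow("gradient", gradient)
cv2.waitKey(0)
```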
robertson_process takes the data captured above and runs it through OpenCV's calibrateRobertson to determine the response curve, then saves that curve out as an .npy file. Alternatively, you can use LuminanceHDR in response recovery mode (make sure to actually choose an output file).
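For the OpenCV route, the core of it is roughly the following; the filenames and exposure times are placeholders you'd swap for the bracket you actually captured:

```python
import glob
import cv2
import numpy as np

# Placeholder bracket: substitute the files and exposure times you captured.
files = sorted(glob.glob("capture_*.jpg"))
images = [cv2.imread(f) for f in files]
times = np.array([1/125, 1/100, 1/80, 1/60, 1/50], dtype=np.float32)  # seconds
assert len(images) == len(times)

# Recover the per-channel response curve with Robertson's method.
calibrate = cv2.createCalibrateRobertson()
response = calibrate.process(images, times)   # float32, shape (256, 1, 3)

np.save("response_curve.npy", response)
```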
I just pushed support to my repo for the response curve output file that LuminanceHDR saves. For the time being, LHDR seems to do a better job than my OpenCV script in terms of the quality of the response curve. At some point over the next few weeks I'm going to reimplement Robertson's algorithm in Python so I can tweak it to handle the corner cases better.
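For reference, Robertson's method alternates between estimating per-pixel irradiance from the current curve and re-estimating the curve from those irradiances. A rough single-channel sketch of that textbook iteration (not the planned reimplementation, and the names here are my own) looks like:

```python
import numpy as np

def robertson_response(images, times, n_iters=30):
    """Rough single-channel sketch of Robertson's iterative response recovery
    for 8-bit data. `images` is a list of same-sized uint8 arrays (one channel
    each), `times` the matching exposure times in seconds."""
    Z = np.stack([im.ravel() for im in images])        # (exposures, pixels)
    t = np.asarray(times, dtype=np.float64)[:, None]   # (exposures, 1)

    # Hat-shaped weights de-emphasise near-clipped pixel values.
    w = 1.0 - ((np.arange(256) - 127.5) / 127.5) ** 2

    g = np.linspace(1e-3, 1.0, 256)                    # start from a linear curve
    for _ in range(n_iters):
        # Step 1: estimate per-pixel irradiance given the current curve.
        wz = w[Z]
        E = (wz * g[Z] * t).sum(axis=0) / np.maximum((wz * t ** 2).sum(axis=0), 1e-12)

        # Step 2: re-estimate the curve as the mean of E*t over all pixels
        # observed at each digital level, then renormalise at mid-grey.
        sums = np.zeros(256)
        counts = np.zeros(256)
        np.add.at(sums, Z.ravel(), (E[None, :] * t).ravel())
        np.add.at(counts, Z.ravel(), 1.0)
        g = np.where(counts > 0, sums / np.maximum(counts, 1.0), g)
        g = g / g[128] if g[128] > 0 else g
    return g
```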
Once you have a response curve file (.npy from my script, or .m from LuminanceHDR), the latest script will turn that into an ICC profile. Note that for now, the script assumes a camera's response is common to all channels and that the G channel has the best data. At some point I may change this; one of my TODOs is to try to force the calibration itself to use a common response curve across all channels (since most cameras do this).
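As a rough sketch of how that G-channel assumption plays out when loading the .npy curve (the layout here is just what calibrateRobertson returns; parsing the LuminanceHDR .m file isn't shown):

```python
import numpy as np

# Sketch: load the .npy response curve saved above and keep the G channel,
# on the assumption that one curve is shared across all channels.
response = np.load("response_curve.npy")      # (256, 1, 3) from calibrateRobertson
curve = response.reshape(256, 3)[:, 1]        # G channel, assumed to be the cleanest

# Normalise so the curve can be fed into the profile's tone curve.
curve = curve / curve.max()
```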
Also, the script requires the imagecodecs release from two days ago at a minimum, and may require today's release. I'm not sure if those have propagated to PyPI yet.