Capture One improved the way they do camera profiles

I think there can be abstraction layers for various levels of user engagement. Video game designers have a knack for that. A good game can cater to novice and esoteric audiences alike. A great game can train a player to become a master. For pros, there is always nightmare mode. For me, that’s G’MIC, where I can do custom stuff. I know I can do it in dt, etc., but I stubbornly choose to go my own way. :stuck_out_tongue:

1 Like

A game produces a user experience as the final product. Photo software is supposed to produce a visual result conveying a personal expression as the final product, and that result might have to connect with further, technically bounded pipelines (printing, publishing or streaming). One is a toy, one is a tool.

Unless I’ve been mistaken since the beginning, and photo software is indeed meant to produce a user experience where people can feel like photographers, without the burden of training, in exchange for a couple of hundred €/$.

That’s where we need to be brutally clear about intent:

  • there is the Kodak mass-market consumer approach (“press the button, we do the rest”), targeting middle-class men with a technophile side who will pay, no questions asked,
  • and there is the Kodak niche-market high-end approach, targeting people making their own prints and looking for a precise result.

Both goals are irreconcilable, because their consequences reach far beyond cosmetic GUI changes, down into the low-level design of the color algorithms and the expected trade-off between pixel correctness and run time.

Spectrally measuring cameras is first and foremost a logistical conundrum: getting cameras into the same place as a measurement device.

Now, there’s data out there, in nooks and crannies of the internet. I’ve collected all I can find on my local hard drive, but that in itself doesn’t scratch the itch. What I have done is to start a GitHub repo:

https://github.com/butcherg/ssf-data

At present it contains data and ICC profiles for all the cameras I’ve measured, as well as the camspec collection (see my SSF posts for more on that). For each camera, one will find the SSF data in a .csv text file, a dcamprof .json file of the same data, and a LUT ICC profile created from the data with dcamprof. There’s also a README that synopsizes all the relevant source attributions, the measurement and analysis methodology, and the DE report dcamprof produces while making the profile. Here’s a good example, for the Nikon D700:

https://github.com/butcherg/ssf-data/tree/master/Nikon/D700/camspec

Note that for some cameras I have multiple data sources, so I do an entry for each.

Right now, the only external source I’ve included is the camspec collection, as their posted copyright and license allows me to do this without negotiation. I have other datasets, but their licensing is less clear. What I may do with those is to post just an ICC file, as the act of producing one constitutes an analysis of the encumbered data, which as far as I can determine falls into the “uncopyrightable” list of things. I still have to research that further, but I think it offers a potential path to expanding my collection.

Getting cameras in front of measurement devices is still the challenge, but having a place to collect the results greases the skids, IMHO…

5 Likes

That is a black-and-white argument, and as such is too simple.
Kodak Portra (a pro photographer’s tool) and Kodak Vision3 (a pro cinematic tool) both choose a look for the user, just like Kodak Gold (consumer grade) does.
For Vision3 there might be workflows that can get back to, or close to, scene-referred light, BUT even then all the gamut remapping and dynamic-range handling is done the way Vision3 (and the subsequent positive-film stocks) does it.

Both approaches, Kodak-consumer and Kodak-professional, make decisions for the user. And for both, they had 60 years of development time to make it look good and fit many use cases. This includes projected film and printed media, i.e. vastly different output dynamic ranges. Your argument is flawed: precision can be achieved even though a ‘look’ is somewhat chosen. And on top of that, I have yet to see a digital workflow that produces an output that looks as good as Kodak Portra or Fuji Pro400H.

You think a game is not the conveyance of personal expression?

No, that is a simple choice to make. An important part of design is taking decisions.

Who mentioned the look? I’m talking about the tooling. Cameras, darkroom equipment, workflows, trichromatic enlarger color heads, and those awful film sensitometry curves that tell you how much time you need to develop your film, at what temperature, in what developer, depending on the contrast you want.

The film emulsion might be defined by the manufacturer, but there is still some latitude for interpretation during the developing and printing process. That stuff was handled backstage by the lab techs or by demanding photographers, and was unknown to most amateur photographers (who still believe today that film was not retouched and that film color came down to choosing an emulsion).

Not as a product that can be sold afterwards, no.

1 Like

I think a company as large as Phase One will be able to afford the capital investment in a monochromator. I think it’s also reasonably safe to assume Adobe’s profiles are derived from monochromator SSF measurements.

Monochromator-based SSF measurements are going to be better than Glenn’s approach IF you can afford the equipment. The key benefit of his approach is affordability for the rest of us, especially those of us who are concerned that Adobe or Phase One will “cook” the profile behind the scenes in strange and unexpected ways.

2 Likes

I cannot recommend basing one’s decisions on ill-posed problems/questions. I saw a logical fallacy and pointed it out. Logical fallacies can be death traps for decision-making.

You think he comes close? There is a whole discussion to be had about this as well.

Are movies a conveyance of personal expression for you?

I think it’s quite astonishing how good Glenn’s profiles are, given that he basically lacks a bulletproof absolute calibration (which would be the really expensive part). A home-built motorized monochromator should cost double, maybe triple, the amount Glenn spent. Wavelength calibration with laser diodes or H- and Hg-discharge lamps is no witchcraft, but measuring the absolute photon flux at each wavelength will be. I don’t have good ideas for how to do this without relying on some assumption about quantum efficiency at some point.

Deciding what is “great out of the box” takes skill, talent and research. It being subjective is completely beside the point. The film stocks are a great example of that, even if those looks are a bit too strong to be defaults in digital software.

I’m not asking darktable to achieve this, as the skills are hard to come by, but your statement was general. There is no contradiction between great out of the box and fine-grained control. The order of the pixel pipeline is also completely irrelevant here: as you have already done in darktable, there is an out-of-the-box pipeline order and a default filmic setting. Great! You could spend 10 years tuning those defaults and building a great out-of-the-box experience, which is what most other software has done.

Great out of the box means that you make decisions about which output medium to prioritize and what looks good. I’d argue that for commercial software this is probably as important and as difficult as the programming itself, and one of the main draws of each package. Many of them do sacrifice low-level control, but that’s a different, though connected, choice.

1 Like

I still regularly flick through my digitized Portra shots and just sigh and feel a bit sad :cry: I also flick some LUTs on, but feel that even if they were perfect Portra sims I would hesitate to use them despite them looking great. I’m just suspicious of my nostalgia, despite loving the look. Complex psychological problems are surfacing :smiley:

1 Like

If I can go just a little bit off topic: I’ve found the yedlin.net resources that @anon41087856 shared very interesting, but I don’t quite understand everything he’s saying. Can you explain to me what spatial fidelity is? I don’t know how to find it online.

In this video, Steve Yedlin, ASC - ResDemo “PART 2”, he uses the term “spatial fidelity”. Does he mean the same thing as image fidelity? He says that halation is a reduction in spatial fidelity. I think he’s saying it doesn’t look like the reality on set?

Believe me, I’m astonished also… :laughing:

No doubt, a monochromator setup will yield a better measurement, mainly in consistency of results. But, being an engineer at heart, I’m always looking for “good enough” in all the science, as that perspective drives more deployable solutions.

Regarding quantum efficiency, I’ve given a lot of thought to @JackH’s questions about this, and what I’ve come to is that SSF input to profiles is less about absolute measurements than about how the three channels differ from each other at each wavelength. dcamprof takes “normalized” input, that is, normalized to the largest value among all three channels, so Anders clearly doesn’t care about absolute values. And, as demonstrated, it’s “good enough”…
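To make that normalization concrete, here’s a minimal Python sketch; the array names and values are hypothetical stand-ins for measured SSF data:

```python
import numpy as np

# Hypothetical measured SSF data: per-wavelength responses for the
# three channels, in arbitrary (relative) units.
wavelengths = np.arange(400, 701, 5)           # nm
r, g, b = np.random.rand(3, wavelengths.size)  # stand-ins for measured data

# dcamprof-style normalization: scale all three channels by the single
# largest value across ALL channels, so the inter-channel ratios at
# each wavelength are preserved while absolute throughput drops out.
peak = max(r.max(), g.max(), b.max())
r_n, g_n, b_n = r / peak, g / peak, b / peak

# The quantity that matters for profiling is how the channels differ
# from each other at each wavelength, which this normalization keeps.
```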

4 Likes

Thanks. Great link, I enjoyed the video.

I think he is avoiding the term ‘resolution’ like the plague, and I think rightly so. Spatial resolution is a far less complex concept than what he alludes to with fidelity. With analog film, the more contrast you image onto the film plane, the better it retains it; film resolution has an MTF curve like a lens in this regard (loosely speaking). Digital sensors have this in principle too, but to a much lesser extent: below Nyquist you sparsely sample the signal, above it you get more and more aliasing (mixing of high frequencies into low ones). The toy example below illustrates that folding.
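A small numeric illustration of the folding (sample rate and frequencies chosen arbitrarily):

```python
import numpy as np

fs = 100.0                    # sample rate, "pixels" per unit length
t = np.arange(0, 1, 1 / fs)   # the sample positions

f_high = 70.0                 # above Nyquist (fs/2 = 50)
signal = np.sin(2 * np.pi * f_high * t)

# Sampling a 70-cycle signal at 100 samples/unit is indistinguishable
# from sampling a 30-cycle one: the high frequency folds back below
# Nyquist as a low-frequency alias (here with opposite sign).
alias = np.sin(2 * np.pi * (fs - f_high) * t)
print(np.allclose(signal, -alias))  # True: identical samples
```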

You could say that halation is an artefact of the capture process. It washes out, or bleeds into the local neighborhood, depending on light intensity (and color, which his node tree seemingly doesn’t account for). CineStill film has the anti-halation layer of cinema film removed; check Flickr for how bad halation can look. With an anti-halation layer, it is very well controlled except in the most overexposed areas.
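If it helps to see the mechanism, here is a crude simulation of that bleeding; the threshold, radius and strength are made-up parameters, and a real model would be wavelength-dependent, as noted above:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_halation(img, threshold=0.8, radius=15.0, strength=0.3):
    """Crude halation: bright areas bleed into their neighborhood.

    img: float image in [0, 1], shape (H, W) or (H, W, 3).
    """
    # Only the brightest regions scatter back through the film base.
    highlights = np.clip(img - threshold, 0.0, None)
    # The back-scattered light spreads over a local neighborhood
    # (blur spatial axes only, not the channel axis).
    sigma = (radius, radius, 0) if img.ndim == 3 else radius
    glow = gaussian_filter(highlights, sigma=sigma)
    # ...and adds to the original exposure. CineStill's reddish halos
    # would need a per-channel (color-dependent) strength here.
    return np.clip(img + strength * glow, 0.0, 1.0)
```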

I hope I did not come off as disrespectful! I think this is the most significant open-source photography thing to have happened in the last year. It’s extremely helpful to know that such a setup delivers copyright-unencumbered profiles of THIS kind of quality.

That kind of makes sense to me. The color filter accuracy should not depend on absolute throughput. But for measuring the relative throughputs, you need a way to check that the sensor doesn’t have too much of a wavelength-dependent skew, or local dips or bumps in efficiency. Or… or you full-spectrum-measure the same sensor with and without the CFA… hmm. I’ll have to think about that.

3 Likes

No analogy is perfect, by nature of the literary device. The video game example is meant to illustrate the two dichotomous approaches to design: at the outset, one must decide whether the software is to be user-friendly first or technically robust first. In this forum, we tend to dismiss the former as nonsense. However, that is a reductive view; it is only nonsense if we sacrifice one for the other. Accessibility is a very important aspect of life and tenet of FLOSS, but so are technicalities and ~science~.

By the nature of economics, commercial software tends to start with accessibility, or must at least have a veneer of polish. Whether theory and implementation are done well is another story. If it is approachable, people will use it and benefit from its use. FLOSS software tends to be either very simple or very complex and low-level, because of who the devs are and what their philosophy is. In general, they want to make a product that is different from the powerhouses out there; there is no point in competing for acceptance or dollars.

The real work is moving in the direction of sophistication for the mainstream and ease of use for the esoteric. Making software, or doing anything in life for that matter, requires nuance, practice and re-examining current models to see what can be done to compensate for their flaws. Sure, there are physical realities, but decisions have to be made, and usually those decisions have consequences that require reworking.

Companies we love to hate have been doing a lot of that internally while trying hard to keep their fiduciary commitments. That is not an easy task!

1 Like

Think of an imaging system as a black box whose input is light from the scene and whose output is its spectrum at a number of sample points within the scene. The box represents a number of filters with names like lens, filter stack, microlenses, CFA, silicon responsivity, etc.

If the photon spectral distribution of a point source is known and we measure the relative spectral count out of the box we have for all intents and purposes characterized its spectral response because we can compare the quantal distribution of the illuminant, say, at every nm between 400 and 700nm to the output of the box at the same wavelengths. If we assume that the system is linear, as we do, then the response is simply the output divided by the input, wavelength by wavelength (1nm by 1nm). If the illuminant changes and we know its quantal distribution, we can predict the box’s output - and, more relevantly, vice versa.

That’s color science working in 301 dimensions. It works similarly with 3 dimensions, except that with only 3 uncontrolled dimensions there can be more slack in the system, and therefore in the potential response of the box, or equivalently in the estimated irradiance. In that case we choose what we think is the best compromise for the scene/illuminant combination, out of a number of possible responses, regardless of what’s in the black box.

But there is no getting around characterizing the source, even if only relatively.
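A minimal numerical sketch of that divide-and-predict step, assuming linearity; all arrays here are hypothetical stand-ins on a 1 nm grid from 400 to 700 nm (the 301 dimensions):

```python
import numpy as np

wl = np.arange(400, 701)  # 301 wavelengths, 1 nm apart

# Known photon (quantal) distribution of the illuminant, and the
# measured relative spectral count out of the black box.
illuminant = np.random.rand(wl.size) + 0.1  # stand-in, strictly positive
measured = np.random.rand(wl.size)          # stand-in

# Assuming the system is linear, the response is simply the output
# divided by the input, wavelength by wavelength.
response = measured / illuminant

# Given a different illuminant of known quantal distribution, the box's
# output is then predictable (and, inverted, the irradiance can be
# estimated from the output).
new_illuminant = np.random.rand(wl.size) + 0.1
predicted = response * new_illuminant
```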

3 Likes

But that’s what I meant. For blackbody radiators we know this, IF we can measure the temperature. Glenn basically used this approach, with uncertainties as to what temperature the lamp actually was at, and with a tilted baseline that could not be fully compensated for when testing different temperatures for a better fit.
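For reference, the relative photon flux of a blackbody at an assumed temperature follows directly from Planck’s law; the temperature and wavelength grid below are placeholders:

```python
import numpy as np

h = 6.62607015e-34  # Planck constant, J s
c = 2.99792458e8    # speed of light, m/s
k = 1.380649e-23    # Boltzmann constant, J/K

def relative_photon_flux(wl_nm, T):
    """Relative photon flux of a blackbody at temperature T (kelvin).

    Planck's law gives energy per wavelength ~ 1/lambda^5 / (exp(...) - 1);
    dividing by the photon energy hc/lambda yields photons ~ 1/lambda^4.
    """
    lam = wl_nm * 1e-9  # nm -> m
    flux = 1.0 / (lam**4 * (np.exp(h * c / (lam * k * T)) - 1.0))
    return flux / flux.max()  # relative units are all we need here

wl = np.arange(400, 701)                     # nm
spectrum = relative_photon_flux(wl, 2800.0)  # e.g. a halogen-ish lamp
```

Which is exactly why the temperature uncertainty matters: get T wrong and the whole baseline tilts.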

Yup, I fully agree. I think this is the tedious part, though. Maybe it’s also expensive… but then again, Glenn’s work shows that ‘good enough’ doesn’t have to be expensive.

1 Like

I’m still struggling with some of the basic arithmetic, however. I’ve revisited the grating efficiency a couple of times, most recently yesterday, and it doesn’t improve things: the result is a max DE of 3.8, worse than the 2.8 from the no-grating, power-only correction. I think I’m not doing it right…
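For what it’s worth, here’s the arithmetic I’d expect, sketched in Python with hypothetical placeholder arrays: the light reaching the sensor at each wavelength is the lamp spectrum attenuated by the grating, so the correction divides the measured response by that product. (If applying a published efficiency curve makes the fit worse, one possible gotcha is that such curves are often given for a specific polarization or diffraction order.)

```python
import numpy as np

wl = np.arange(400, 701, 5)  # nm, hypothetical measurement grid

lamp = np.random.rand(wl.size) + 0.1          # relative lamp photon flux
grating_eff = 0.2 + 0.6 * np.random.rand(wl.size)  # grating efficiency curve
measured = np.random.rand(wl.size)            # camera channel response

# Light reaching the sensor = lamp spectrum * grating efficiency,
# so the SSF estimate divides by their product, wavelength by wavelength.
ssf = measured / (lamp * grating_eff)
ssf /= ssf.max()  # back to relative units
```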

I received an email from the Open Film Tools principal yesterday, with permission to use their data. He also asked about my grating source/application; I was embarrassed to report that I did none.

Which makes sense: as long as there is no nonlinearity in the system somewhere (which you can easily verify by making sure you have some headroom in your channel values before you hit clipping), it would be expected that any profiles derived from an SSF would be relative, not absolute.