It’s more about being pleased with the resulting image than with the solution. The solution is a tool, not a result, and it often has to deal with technical constraints that are pleasant for nobody. I’m pleased with being able to drive only 50 min to go see my grandparents; I’m not pleased with having to own and maintain a car. If I’m really unhappy, I could take the train + bus, but then I’m in for a 2h30 ride, so it’s not a solution to the same time-constrained problem. How it feels is much less important than what it allows you to do.
Bikeshedding with pig-headed people and having to repeat the same info is a considerable loss of my time, you have no idea. On certain days, I do nothing but answer people.
Yes. But discussing with whom? I’m okay with discussing with people who have experience and skills, if I know I can trust their eyes and I’m confident they are aware of the problem in all its complexity. Taking the opinion of every internet rando who has edited 3 pics/week for the last year and struggles with basic color theory is out of the question. Show your portfolios, guys. The mark of skilled people is that they can produce a good result even with shitty software; the difference software makes is the time required to go from intent to result.
For fuck’s sake, this is designed to ensure C^2 continuity over the full range while giving control over the rate of convergence toward the bounds, nothing more, nothing less. And there is an alternative with 3rd-order curves. How many times do I need to repeat myself?
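For the record, here is what C^2 continuity means at a junction $x_0$ between two curve segments $f$ and $g$ (these are the generic conditions, not the exact filmic spline):

$$
f(x_0) = g(x_0), \qquad f'(x_0) = g'(x_0), \qquad f''(x_0) = g''(x_0)
$$

Matching value, slope and curvature at a joint consumes 3 of a segment’s degrees of freedom, so a 4th-order polynomial (5 coefficients) keeps room to control how fast the curve flattens toward the bound, while a 3rd-order one (4 coefficients) gives up that extra control.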
Alternatives imply that they fix the same problem under the same constraints, not a simplified/trimmed version thereof.
You are right, and yet this request is not realistic given current resources.
Design is to be done against SMART tasks:
- specific,
- measurable,
- assignable,
- realistic,
- time-related.
Any non-SMART design goal is not suited for a design process but for a political speech. That’s why I scream every time I read “intuitive”: it has none of these properties. At that point, I believe algos should have a goal in terms of the image properties they aim to control, plus constraints for what they need to care about (what you call robust), and that’s all. Consistency is going to be difficult in an app that is 10+ years old and coded in sediment layers. Orthogonality is paramount.
You can replace filmic with a base curve or with a 3D LUT if you want. You just need to mind where that scene-referred to display-referred transform happens in the pipe.
That scaling is achieved by the exposure module; there is no auto-scaling aside from that. It’s an old assumption that middle grey is to be met at 18%. Regardless of the DR, we use 18% as a pin point. Then:
- if HDR or scene-referred, we leave the white value unbounded;
- if SDR or display-referred, the white value is bounded at 100% of the display peak luminance.
That makes things easier because we know that, after this scaling, luminance ranges are correlated between all spaces, and all we need to care about after the pinning is the bounds of the DR, which vary between spaces (while middle grey does not).
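A minimal sketch of that pinning step, assuming we already measured the scene’s middle grey (variable names are mine, not darktable’s):

```python
import numpy as np

def pin_middle_grey(rgb, measured_grey, target_grey=0.18):
    """Linearly scale linear RGB so the measured middle grey lands on the target.

    rgb           : linear, sensor-referred RGB array
    measured_grey : luminance of the scene's middle grey, same scale as rgb
    target_grey   : 18% by convention
    """
    gain = target_grey / measured_grey  # plain exposure gain, no curve
    return rgb * gain                   # white is free to exceed 1.0 (scene-referred)
```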
Yes, that would be an issue. We need to start the pipeline with values bounded in 0–100% sensor-referred to spot raw clipping (RGB = 100%), to apply denoising (scaling changes the variance and invalidates the noise profiles), and for some kinds of non-linear input profiles (LUTs) that can’t be scaled.
So we start in bounded linear sensor-referred, then convert to unbounded scene-referred by linearly scaling to pin the greys, then convert to bounded display-referred by whatever method (simple clipping of out-of-range values, or clever tone-mapping).
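The same three stages as a toy sketch (the Reinhard curve here is just one possible stand-in for “clever tone-mapping”, not what darktable does):

```python
import numpy as np

def clip_to_display(rgb):
    # naive display mapping: clamp everything into [0, 1]
    return np.clip(rgb, 0.0, 1.0)

def reinhard(rgb):
    # illustration-only tone-mapping stand-in: compresses highlights smoothly
    return rgb / (1.0 + rgb)

def pipeline(raw, measured_grey, tone_map=clip_to_display):
    # 1. bounded sensor-referred in [0, 1]: clipping detection, denoising,
    #    and non-linear input profiles happen here
    sensor = np.clip(raw, 0.0, 1.0)
    # 2. unbounded scene-referred: linear gain pins middle grey at 18%
    scene = sensor * (0.18 / measured_grey)
    # 3. bounded display-referred: clip or tone-map back into [0, 1]
    return tone_map(scene)
```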
Yes.
Because the white point is kept as-is while the black point moves by the same amount as the grey point, you necessarily expand the DR.
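A one-line sanity check, writing the DR in EV as the log ratio of white to black:

$$
\mathrm{DR} = \log_2\frac{W}{B}
$$

If $W$ stays put and the grey and black points both drop by one EV ($B \to B/2$), then $\log_2\frac{W}{B/2} = \log_2\frac{W}{B} + 1$: one extra EV of dynamic range.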
Nope. But anyway, changing the grey point is now discouraged.