Scene-referred and Display-referred – Part 1

Excuse my bad English :wink:

Cinema and Photography:

When you watch a movie, it’s (or was) often in a movie theater (in French, cinemas are often called “salles obscures” – dark rooms). Now, with streaming, we often watch movies at home. The evolution of TV (or projector) technology – larger sizes, OLED, etc. – makes it possible to feel “at the movies” at home.

Let’s note the differences (I’m not talking about sound):

  • At the cinema, the room is dark, the screen is very bright, and the surroundings are dark (admittedly, there are rare outdoor projections).
  • At home, the room is slightly dark or fairly bright (a living room during the day), the screen is smaller and, depending on the technology, more or less luminous, and the background corresponds to the average luminance of the room.
  • The creation of the film requires a great deal of work in terms of shooting, with different lighting (studio, exterior, night, day, etc.) and editing. Nevertheless, the hundreds of thousands of images are not edited one by one, but as a whole.
  • The film tells a story. People come to see it for a variety of reasons: the director, the actors, the story, the screenplay, etc. They rarely keep a record of that story afterward.
  • It wouldn’t occur to anyone (except an eccentric) to scrutinize the cinema screen with a magnifying glass to see the details.

This evolution in the world of cinema (movie theaters, streaming, etc.) led the industry to design ACES - Academy Color Encoding System - and associated standards to enable films to be easily distributed and adapted to a variety of media (movie theaters, TV sets, projectors, smartphones, etc.).

The concepts of “scene-referred” and “display-referred” are used. In brief:

  • Display-referred: all color judgments or adjustments are made in a given color space – the one in which the image will be displayed.
  • Scene-referred: the system takes into account the color space of the imported media. The references are the lights and colors of the shoot, not those of the display.
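To make the distinction concrete, here is a minimal sketch in Python (my own toy illustration, not the exact math of any standard): a scene-referred linear value is tone-mapped and gamma-encoded into a display-referred value. The `white` and `gamma` parameters are invented for the example.

```python
# Illustrative only: scene-referred values are linear and open-ended
# (a highlight can be many times brighter than mid-grey); display-referred
# values are compressed and encoded for one specific output device.

def scene_to_display(linear, white=4.0, gamma=2.2):
    """Map a scene-referred linear value to a display-referred one.

    'white' is the scene luminance we choose to map to display white;
    a simple clip stands in for a real tone curve.
    """
    tone_mapped = min(linear / white, 1.0)   # compress the scene range to [0, 1]
    return tone_mapped ** (1.0 / gamma)      # encode for the display

print(scene_to_display(0.18))  # mid-grey stays well below display white
print(scene_to_display(6.0))   # a highlight brighter than our chosen white clips
```

The point is only that the second representation is tied to one viewing device, while the first describes the scene itself.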

When we look at photographs, it can be:

  • In an exhibition, on a fairly large paper print – often 100 cm × 70 cm, sometimes more, sometimes less. The lighting is ambient, and the hanging avoids direct sunlight and reflections. The background is often neutral. Staging is done to enhance the images.
  • At home on your computer, the screen is usually 35 cm to 60 cm diagonal. The lighting is that of the room the computer is in; HDR screens are rare and usually of average quality, and the background varies over the course of the day (backlight, darkness, etc.). It is also possible to view photos on the family TV set, but this is a rather unusual approach… it often has the same characteristics as a monitor, apart from the larger size and sometimes better quality.
  • At home, or out and about (transport, restaurants, bars, etc.), on a smartphone. Screen dimensions are small, with a diagonal of 12 to 20 cm. Outdoors, it’s often hard to see the screen well when sharing an image with friends. The background is rarely dark.

A photo is often a favorite. It can be part of a story. A few years ago we kept a record of that story (paper prints) – and now?

  • There’s the photo of the “man in the street” (smartphone), who shoots JPEG (without knowing it), and the photo of the “specialists”, who have a relatively (or very) expensive camera, shoot raw, exchange views on forums, and (not all of them) examine images with a magnifying glass.
  • For the “specialists”, processing is part of the job (even if getting the photo right at the moment of shooting is an important factor). We talk about white balance, demosaicing, noise reduction, etc. – a whole host of terms unfamiliar to the “man in the street”.
  • These photos are taken under a wide variety of conditions and for a wide variety of purposes: studio, apartment, landscape, portrait, macro, astrophoto… Natural light, artificial light, morning, evening, night…

Since 1997, researchers have been asking themselves how to improve the system for capturing and rendering images.
They came up with CIECAM97s (1997), which then became CIECAM02 (2002), and now CIECAM16 (2016).

They divided the approach into 3 processes:

  • The one relating to the shot and its environment: scene (or source) conditions. Here we return to the idea of “scene-referred”: exposure conditions, illuminant temperature, etc. are taken into account.
  • Image processing to adapt the image to the environment, with notions such as simultaneous contrast, the difference between lightness and brightness, and the distinctions between chroma, saturation, and colorfulness.
  • Viewing conditions – here we return to the idea of “display-referred”: background environment, display luminance, etc.
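As a toy illustration of the “scene conditions” stage only (CIECAM’s actual equations are far more elaborate), here is a von Kries-style chromatic adaptation in Python – the simplest ancestor of what the model does. The white values are invented for the example:

```python
# Toy von Kries adaptation: rescale each channel by the ratio of the
# viewing white to the scene white, so a patch that looked white under
# the scene illuminant still looks white under the viewing illuminant.

def von_kries_adapt(rgb, src_white, dst_white):
    """Adapt an RGB triplet from one illuminant's white to another's."""
    return tuple(c * d / s for c, s, d in zip(rgb, src_white, dst_white))

# A warm (tungsten-like) scene white mapped to a neutral display white:
scene_white   = (1.2, 1.0, 0.7)
display_white = (1.0, 1.0, 1.0)
print(von_kries_adapt(scene_white, scene_white, display_white))  # -> neutral
```

CIECAM layers much more on top of this (degree of adaptation, surround, background), but the idea of correcting for the scene illuminant before judging color is the same.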

As you can see, CIECAM doesn’t oppose “scene” and “display”, but uses both.

Of course, nothing prevents you from using the principles and certain tools or standards of ACES in photography (e.g., ICC profiles).

Of course, you can do without CIECAM, but take a look at your own process and you’ll see that there are often 3 stages when working with raw files: the raw part for the source (in fact, in RawTherapee, the files associated with this processing often have “source” in their name, like Rawimagesource), then more or less sophisticated processing with curves, logarithmic conversions, contrast, and then output to a medium (monitor, printer…).
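Those three stages can be sketched as follows. The function names and the simplistic math are mine, for illustration only – this is not RawTherapee’s actual code:

```python
# A minimal sketch of the three stages: source (scene side), processing,
# output (display side). All numbers are made up.

def source_stage(raw):
    """'Scene' side: decode and normalize the raw data,
    keeping values linear and scene-referred."""
    peak = max(raw)
    return [v / peak for v in raw]                  # normalize, stay linear

def processing_stage(linear, contrast=1.2):
    """Editing: tone curve, contrast, etc., on the working data."""
    return [min(v ** (1.0 / contrast), 1.0) for v in linear]

def output_stage(worked, display_gamma=2.2):
    """'Display' side: encode for the target medium (monitor, print)."""
    return [v ** (1.0 / display_gamma) for v in worked]

raw = [120, 400, 980]                               # fake raw sensor values
image = output_stage(processing_stage(source_stage(raw)))
```

Only the last stage knows anything about the medium; the first two work on scene-referred data.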

I’ll tell you the rest of the story in Part 2: the particularities of CIECAM in RawTherapee.



Thanks a lot indeed for this in-depth explanation and also for your recent tutorials on this forum!
Looking forward to the next part :slight_smile:

BTW, I have never used darktable, but it looks like the “scene-referred” method is often the preferred one these days for that software.


Thank you for the background. I find your tutorials thoroughly enjoyable (even though right now I’m not an active RawTherapee user). Now, you left us with a nice cliffhanger… :smiley: Looking forward to part 2!
