@kofa
Thank you for your comment.
The problem of documentation is far from simple, quite apart from the question of the medium itself (IP address, server, etc.).
On the one hand there is the developer’s native language; on the other, the official language, English. Whether the translator is automatic (DeepL, Google, etc.) or human, questions arise about technical vocabulary and about accuracy, that is, respect for the original intent and content. In the case of a human translator - who first has to be found - it is not a given (I know a few near-bilinguals) that someone used to translating literary or economic documents is proficient in digital photography.
Beyond that, it would be highly desirable for this “translation” to be accompanied by a dialogue:
- on graphic architecture, labels, tooltips, etc.
- on understanding what it is for and how it works.
These two aspects can lead to reworking the GUI, and in some cases the algorithm as well.
The third point concerns the content itself:
- practical: what should the user touch (sliders, etc.), and with which media: text, images, videos, etc.?
- algorithmic: what principles are used and why?
- scientific: on what mathematical and physical bases?
- and what part do we give to collective learning? In French, “Retour d’Expérience” (REX), whose meaning is poorly conveyed by the English “feedback”… I made an attempt in the spring of 2024, but for me that is not quite it:
“Rawtherapee Processing Challenge feedback”
Should we produce complete, separate documents? For different audiences? In the case of the latest GHS document, everything is mixed together, with a completely arbitrary bias on my part.
Of course, using pull requests, for example, will make it possible to expand and share content. But the same questions arise:
- what is the contributors’ skill level, what is their level of English, and how well do they understand the nuances of the original language and the technical issues (algorithms, etc.)?
- these pull requests are likely to bring a multiplicity of viewpoints, whereas for code the principle is simple and the decision is usually made between two people.
- of course, this dialogue is a source of richness and collective learning, but we run the risk of spending a lot of time on it and disappointing the participants.
Of course, AI may also come into play…
But in the end, who will arbitrate and validate, and on what basis?
And who makes the updates, and when and how?
Another far from insignificant question: does the new RawPedia system (Hugo, etc.) have a powerful search engine?
How does one even know that something exists? For example, who knew about “Rawtherapee Processing Challenge Feedback”? How do you search with something other than the obvious keywords? Will the system handle free-text queries and return relevant answers, as Firefox or Google do?
Indirectly, this raises the question of RawTherapee’s architecture, both internal and as exposed through the GUI. What lives where? For example, the new “Contrast Enhancement” in the Abstract Profile (Color tab) is entirely wavelet code: the function is embedded inside ‘ipwavelet.cc’, yet its GUI, simplified so as not to scare people, does not sit under Wavelet Levels (Advanced tab). What correlation do we draw between the usage inherited from our habits since 2006, the practices we (who?) wish to promote, and the architecture of the ‘engine’ and ‘GUI’ code?

Jacques