In general I think that reviews comparing a program with [insert your favorite program here] are problematic, because the reviewer naturally leans toward what he already uses and knows. A good review should explain the software's functionality and user experience on its own terms, and only then draw comparisons as context.
But I have a bigger issue when the reviewer doesn’t even take the time to learn the program or process an image from start to finish. A particularly horrendous example was a DT review on ThePhoblographer where emotions got the better of the reviewer because she assumed DT was a LR clone and ran into problems while editing a Fuji file on her Mac. After that, a whole slew of inaccuracies and misstatements ensued because the author didn’t take the time to follow up on her negative experience. In the end she concluded that DT wasn’t suitable because you actually have to learn how to use the program.
It’s one thing for a user to decide a program isn’t worth the effort, but a reviewer has an obligation to do the research to credibly advise the reader.
That isn’t a misreading at all. Merely linking to a library that is GPL-licensed makes your software a “derivative work” that also falls under the GPL. Fortunately…
LGPL now stands for “Lesser” (it originally meant “Library”), and it is effectively the license for libraries; the key difference is that linking to an LGPL library from another program does not make that program a derivative work. Fortunately, most libraries are LGPL and not GPL. A GPL-licensed library is a no-no in a corporate environment unless you take measures to contain it. (For example, communicating over a network, pipes, or some other form of IPC is not linking, so creating an open-source wrapper/shim around the library is fine.)
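To make the IPC point concrete, here is a minimal sketch (in Python; the GPL-licensed command-line tool you would pass in is hypothetical) of the wrapper pattern: the proprietary side drives a separate process over stdin/stdout pipes instead of linking against the GPL code.

```python
import subprocess

def run_via_pipe(command: list[str], payload: bytes) -> bytes:
    """Send `payload` to a separate process over stdin and read the
    result back from stdout. The two programs only exchange data
    through a pipe (no shared address space, no linking), which is
    the kind of containment boundary described above."""
    result = subprocess.run(
        command,               # e.g. an open-source shim around a GPL library
        input=payload,
        capture_output=True,   # collect stdout (the processed result)
        check=True,            # raise if the tool exits non-zero
    )
    return result.stdout
```

Whether a court would agree in every case is a separate question, of course; this only illustrates the technical separation people usually mean.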
As a small side note, while the LGPL does not explicitly forbid static linking, its requirements for static linking are so impractical (end users must be able to relink a version of the library they compiled themselves against the rest of your application) that it’s effectively forbidden for all practical purposes.
Fairly rare corner case here, other than personal hack projects.
Yup. Some of that FOSS was explicitly developed for that reason (such as Bionic vs. glibc).
I have no doubt that you can get great results with Darktable. However, until now I have used commercial tools. For one thing, my main computer is still a Mac. For another, it seems to take a lot more time to learn Darktable. And time is a problem. But I’m still interested, especially if you can also get fast results with Darktable.
There are real challenges with teaching modern darktable, and part of that is indeed due to its open nature.
There used to be a terrific “Open Source Photography Course” by Riley Brandt. It taught darktable 2.7 (or something like that) brilliantly.
But darktable moves fast, and five-year-old teaching material is no longer applicable. Commercial software is generally more concerned with UI stability, which makes it easier to teach.
And furthermore, commercial software often spends a sizeable marketing budget on bespoke teaching material, and that’s obviously not something we do here (valiant efforts of some community members notwithstanding).
That said, darktable’s manual is excellent, and some of the YouTube videos on darktable are of extremely high quality, though regrettably they are swamped by a sea of video spam.
I generally consider that whole “private organization” exception/language such a scary grey area that one should assume that anything that is a GPL derivative work could become public, especially since I’ve seen that exception/language abused so many times to facilitate GPL violations (google Chad Goodman and Anthrax Kernels, for example).
We most definitely don’t use GPL libraries for internal tools here. (That’s why we looked at CGAL briefly but decided against it: too much risk.)
Mark made a very basic how to quickly… further explained on his blog
Boris just made a nice video… you can see that he makes edits quite quickly…
The key is to derive some presets that support your look; if they are universal, you can auto-apply them to help speed things up.
Personally I use these with a sort of key-frame approach: pick an image, do a full edit, and then copy the edit to the next images that are similar; then edit the next image that needs a new approach and again copy to its similar images, walking through a set like that. But everyone’s needs are diverse…
I think darktable in particular gets a hard time from reviewers because many raw editing programs have similar controls and so feel quite familiar, whereas darktable does quite a few things differently and therefore requires more initial effort. Many of darktable’s tools make it quicker and easier to achieve good results once you have got to grips with them, but many reviewers simply dismiss them because they don’t follow convention.
Conversely, I now find that when I try a piece of commercial software out of curiosity, I get frustrated that I can’t quickly sort the image using tone equaliser or colour balance, and it will get binned quite rapidly.
Talented people can recognize good software regardless of price tag. Untalented people believe the most expensive camera will make them a great photographer, the most expensive car a great driver and the most expensive software a great editor of images.
It doesn’t matter what technical means you use to “contain” it - if you are providing functionality that depends on the GPL code, you have made a derivative work.
And if your code doesn’t do anything useful without the presence of the GPL code, then I think there is a strong argument that it is a derivative work in itself.
Not really. It’s perfectly fine to run your proprietary software on (GPL) Linux. It’s perfectly fine to sell a Raspberry Pi running your proprietary software on (GPL) Linux. It’s perfectly fine for your proprietary software to call (GPL) bash scripts or GPL executables.
It’s even perfectly fine to link your proprietary software to GPL code, so long as you only do that in-house. You can even sell a web service that does this. The only thing you can’t do is link your proprietary software against GPL software and then distribute it.
And even there, there are caveats. For example, the Linux kernel explicitly states that user-space programs using its system-call interface are not derivative works, so you may distribute proprietary code that runs on top of the GPL Linux kernel. Similarly, glibc is LGPL, and GCC’s runtime libraries carry a runtime library exception.
A good example of a modern FOSS art program is Blender. Nobody can deny its relevance, and if they do, all they need to do is head over to https://fund.blender.org/.
Note that the Affero GPL was designed to work around this.
Note that the nature of the AGPL is such that if you interact with an AGPL component over a network, you are entitled to the source code of that AGPL component. But the AGPL does not extend to the software that is merely interacting with AGPL software over the network.
As an example, if you passed a RAW image over the network to a GPL program that processed it, and then ingested the result into a proprietary program that performed further processing, that would be allowed.
Now if you directly mapped a particular function call parameter to serialized network data, you’d be pushing the limits.
But if the topic is what should be expected of new users and reviewers in their judgement of dt’s usability and quality of rendering, we should not be surprised that those who don’t already know dt view it in a less than favourable way, since the initial UI set-up presents itself “out of the box” in a non-optimal way for beginners.
E.g. the modules panel is configured with the Default preset rather than the scene-referred preset, and even if you activate the scene-referred preset, the diffuse or sharpen module that you point to is not visible “out of the box” (unless you make an extra click on the tab).
In general I find the dt UI to be fine, but the dt settings that are initialised upon installation ought to get a work-over with new users in mind.