Darktable video tutorials

For French speakers, there’s this amazing YouTuber. He makes awesome tutorials for a lot of F/OSS projects related to photography.


Great find! Thanks for the link.

This is something that I need to get my arse in gear and start making: I have lots of ideas for Darktable video tutorials, such as editing 32-bit EXRs, luminance masking, and so on.


Ian, it would be great if you could create some Darktable video tutorials; apart from Robert Hutton’s (which are good), there’s not much out there. The ones on the Darktable blog are good but old.
So I look forward to them. :smile:


I second this notion… :slight_smile:


Right. Just need to figure out how to record a smooth screencast and then learn how to edit video.

Don’t forget, you’re not alone! We’re a network of folks ready to help in some way. :slight_smile:

I’d be happy to help with capturing tips or editing as needed. (All my years of fiddling around in Blender can finally pay off!)

Some time ago, a user on G+ pointed me to Open Broadcaster Software: it is open source and works on multiple platforms.
I’ve not tried it yet, since I had already started using gtk-recordmydesktop and it fits my simple needs. The output can be uploaded directly to YouTube without re-encoding.
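
If either tool falls short, screen capture can also be scripted by driving ffmpeg’s x11grab input. This is only a minimal sketch under assumptions of my own (an X11 session on display :0.0, ffmpeg on the PATH); the geometry, frame rate, and file name are placeholders, not anything recommended in this thread:

```python
# Rough sketch: record an X11 desktop with ffmpeg's x11grab input.
# Assumptions (not from this thread): ffmpeg is installed, the session
# runs on X11 display :0.0, and a 1920x1080 capture at 30 fps is wanted.
import subprocess

def record_screen(outfile="screencast.mkv", display=":0.0",
                  size="1920x1080", fps=30):
    cmd = [
        "ffmpeg",
        "-f", "x11grab",         # grab frames from the X11 display
        "-video_size", size,     # input options must precede -i
        "-framerate", str(fps),
        "-i", display,
        "-c:v", "libx264",       # widely supported H.264 encoding
        "-preset", "ultrafast",  # keep CPU load low while recording
        outfile,
    ]
    subprocess.run(cmd)          # stop with Ctrl+C; ffmpeg finalizes the file

if __name__ == "__main__":
    record_screen()
```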

Concerning video editing, avidemux is the only tool I know and have personally used.

Hope this helps.

OK, editing tips needed then, please.

Kdenlive might be something for the basic editing.

Yes, Kdenlive is good, or OpenShot.

If you are a little familiar with Blender, here are two good tutorials covering the newest features and workflow.

It all depends on what kind of clips you want to make.
For example, there is Lightworks, a program for professionals; Pulp Fiction and Titanic were edited with it.

The free version lets you output “only” 720p, and only with YouTube’s standard codecs.
For me that’s not practical, but if you only want to post to YouTube it is probably OK.

If you want, I can help you with the editing.
I am also looking for a good editing program on Linux, so it would be a good chance to get some practice in again.


To make those videos, I used Kazam to record my desktop and Kdenlive to edit. I also use avidemux from time to time for simpler tasks like file splitting.
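
For the splitting part, an ffmpeg stream copy is also easy to script. A minimal sketch of my own, not what was actually used here; the file names and timestamps are placeholders, and since stream copy cuts at keyframes the split points are only approximate:

```python
# Rough sketch: split a clip without re-encoding via ffmpeg stream copy.
# Assumptions (not from this thread): ffmpeg is on the PATH; file names
# and timestamps are placeholders.
import subprocess

def split_clip(infile, start, outfile, duration=None):
    cmd = ["ffmpeg", "-ss", start, "-i", infile]
    if duration is not None:
        cmd += ["-t", duration]     # keep only this much footage
    cmd += ["-c", "copy", outfile]  # copy streams: fast and lossless
    subprocess.run(cmd, check=True)

# Example: first two minutes into one file, the remainder into another.
split_clip("tutorial.mkv", "00:00:00", "part1.mkv", duration="00:02:00")
split_clip("tutorial.mkv", "00:02:00", "part2.mkv")
```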



I discovered the darktable tutorials from Harry Durgin a while ago (YouTube, Google+); they are extremely impressive. But there’s something I do not understand (maybe I missed it in the videos), and maybe somebody here has seen them and can answer my questions.

https://www.youtube.com/watch?v=qNNm4g-mUKU

After a couple of processing steps, he exports the image as a TIFF and reads it back into darktable to continue editing. He does this about four times per picture. IMHO this totally contradicts the non-destructive workflow within darktable. Why does he do so?

That was a nice watch. I have so much to learn!

From what I gathered, it might be largely due to his workflow, which seems to be based on stages going from basic overall adjustments to more specialized ones. Also, Darktable applies all modules in a fixed order, irrespective of the order you enable them in, so I suppose it’s possible that if he didn’t export to TIFF along the way, the operations he subsequently applied might not have the same effect. Not sure…

[quote=“chris, post:15, topic:142”]
Why does he do so?
[/quote]From what I recall, he did it for performance and ‘cleanliness’. I personally try to avoid it, but I also tend to make fewer adjustments than he does. That said, you can still keep that workflow non-destructive; you just need to reopen and re-export all subsequent images to make edits propagate.

Harry explains briefly at this point in his sixth Darktable video. He mentions wanting to know what his parametric masks apply to, given Darktable’s fixed order of application of modules, though I wouldn’t have thought that would necessarily be an issue given Darktable’s picker and ‘display mask’ features. Anyway, it seems to come down to “taking control of [the] pixel pipe”.

I’m enjoying seeing how he uses the channel mixer to rebuild the luminance channel and the high and low pass modules to control edges and contrast, none of which I’ve yet tried on my own images.

Thanks, Jonas & David, for the explanations, which lead to follow-up questions.

This is not true for some operations. An example: you cannot use the crop, perspective correction, or lens correction modules in the first or intermediate stages if you want to change their parameters later. For cropping and perspective correction this is not a big deal, since you could do them in the last step (though this may slow down your processing/workflow).

For lens correction (and the upcoming automatic perspective correction) it is a bigger deal: these rely on the lens parameters and would therefore have to be done in the first step, since those parameters are only correct for the raw pixel data, not for an exported image, especially once such corrections get mixed through all the steps. You could argue that these corrections can be done in the first stage and never touched again, but sometimes I decide to remove e.g. lens correction after processing, because the uncorrected picture is more appealing.

Is this a real issue? I understand that the parametric masks rely on the pixel data at the pipeline position of the module, so tone mapping would not be respected by subsequent editing steps using masks if the masked module’s pipeline position comes before the tone-mapping step (he does a lot of shadows/highlights correction using low-pass filters, which might be a problem in that case). Are there ways to work around this, e.g. could combined masks help?

Working with these modules in combination with different blending modes is extremely impressive, and something I definitely have on my agenda to learn. I mean, not the technical side (that is not so complicated to understand), but how to utilize them to improve the image. He does all those little steps, each of them changing only very little, yet leading to an impressive result. I guess it takes a lot of experience to master these techniques. Are they emulating analogue techniques?

[quote=“chris, post:19, topic:142”]
This is not true for some operations. An example: you cannot use the crop, perspective correction, or lens correction modules in the first or intermediate stages if you want to change their parameters later. […]
[/quote]I wasn’t clear enough. What I meant is that you can just open the first-stage raw image again, export it, then reopen the exported image, and rinse and repeat, to make the change propagate. The subsequent editing steps will not be lost unless you overwrite the files.
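
In script form, that rinse-and-repeat could look something like the following minimal sketch. It is my own illustration, assuming darktable-cli is installed and that each stage’s edit history lives in an XMP sidecar next to that stage’s input file; none of the file names come from the thread:

```python
# Rough sketch: propagate a change in an early stage down a staged TIFF
# workflow by re-exporting every stage in order with darktable-cli.
# Assumptions (not from this thread): darktable-cli is on the PATH, each
# stage's history sits in a sidecar named <input>.xmp, and the file
# names below are placeholders.
import os
import subprocess

# Stage inputs in order: the raw file first, then each exported TIFF.
stages = ["photo.nef", "stage1.tif", "stage2.tif", "stage3.tif"]

for src, out in zip(stages, stages[1:]):
    sidecar = src + ".xmp"  # darktable's sidecar naming scheme
    if os.path.exists(out):
        os.remove(out)      # clear the stale export before re-rendering
    # darktable-cli <input> <xmp> <output> renders that stage's history
    subprocess.run(["darktable-cli", src, sidecar, out], check=True)
```

As noted above, the later stages keep their own sidecars, so their edits survive the re-export as long as those files themselves aren’t overwritten.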


That’s true; what I wanted to emphasize is that this workflow does not suit every requirement. The technique can help to overcome restrictions posed by darktable’s design, but you have to be aware of what works and what does not, and restrict yourself accordingly. One could argue that it would be beneficial to be able to shift modules that are not heavily interconnected (e.g. those used by Harry Durgin) up and down the pipeline, but I guess the GUI for that feature would introduce a lot of complexity. I wonder whether Harry Durgin could achieve the same results without those intermediate steps, and how that workflow would differ.