small patch preview

I am wondering if it would be possible to select only a small part of the image for processing and work on that (magnified), with the assumption that the calculations only happen on that part.

This would be especially useful for fine-tuning computationally intensive modules like diffuse or sharpen.

I understand that I can just crop, but then I lose the original crop settings, since that module does not allow duplication. A small “preview” module that just crops to a 128x128 rectangle or similar would be great; I would just toggle it. But maybe this is already possible and I am missing the obvious, so suggestions are welcome.
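To make the idea concrete, here is a rough sketch in plain C of what such a “preview patch” could do: copy a small window out of the full buffer and let the expensive module run only on that copy. The types and the `expensive_module()` call are hypothetical, not darktable code.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical RGBA float image buffer -- a sketch only, not darktable's types. */
typedef struct {
  int width, height;   /* dimensions in pixels            */
  float *data;         /* width * height * 4 float values */
} image_t;

/* Copy a patch_size x patch_size window starting at (x0, y0) into a new,
   much smaller buffer, so an expensive module could run on that alone.
   Bounds checking is omitted for brevity.                                */
static image_t extract_patch(const image_t *src, int x0, int y0, int patch_size)
{
  image_t patch;
  patch.width = patch.height = patch_size;
  patch.data = malloc(sizeof(float) * 4 * patch_size * patch_size);
  for(int y = 0; y < patch_size; y++)
    memcpy(patch.data + 4 * y * patch_size,
           src->data + 4 * ((y0 + y) * src->width + x0),
           sizeof(float) * 4 * patch_size);
  return patch;
}

/* Usage idea (expensive_module() stands in for diffuse/sharpen, not a real API):
   image_t patch = extract_patch(&full_image, 1000, 800, 128);
   expensive_module(&patch);
   free(patch.data);                                                             */
```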


Maybe the only issue here, hinted at by your use case, is that evaluating sharpen or diffuse at any zoom other than 100% involves pixel interpolation, i.e. some scaling, and thus potential artifacts. So for that sort of evaluation I think you would have to stick to 100%.

I would be fine with pixels as blocks (as currently happens when zooming in).
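For what it's worth, “pixels as blocks” is just nearest-neighbour magnification, which involves no interpolation at all. A minimal sketch for a single-channel float buffer (hypothetical, not darktable code):

```c
/* Nearest-neighbour magnification: every source pixel becomes a solid
   factor x factor block on screen, so no interpolation is involved.
   dst must hold (sw * factor) * (sh * factor) floats.                 */
static void magnify_blocks(const float *src, int sw, int sh,
                           float *dst, int factor)
{
  const int dw = sw * factor;
  for(int y = 0; y < sh * factor; y++)
    for(int x = 0; x < dw; x++)
      dst[y * dw + x] = src[(y / factor) * sw + (x / factor)];
}
```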

You could store the original crop settings as a preset and reapply them later.

Can you explain what you mean by this? I still don’t really get it. Are you trying to do the computations on fewer pixels? Does cropping even do this, or does it just restrict what you currently see, so the computations still run on the whole image? I would think so, but this could be wrong.

And again, trying to do any detail assessment at anything other than 100% will not be accurate at all due to the scaling for the display, so I am not sure what advantage this window would give you.

I think you would go to 1:1 and then drag the box in the navigation preview to move it around the image. That would be a one-to-one map of your pixels and accurate for assessing your modifications.

I might be missing something, unless you are trying to somehow cut down on the computations made, and I am not sure how that would be managed.

Yes, precisely.

I don’t think so, it is as slow as for the whole image.

Okay, I think I get what you are after: based on a selection, you would create, say, a buffer with only that data, and the module in focus would do its calculations on that “representative” area rather than the whole image, for the purposes of evaluation?

Exactly.

To make it short: you probably know that dt always calculates data only for the area visible in the darkroom window? That’s what we have been doing for performance reasons (the ROI concept) for a long time …

The downsides of this concept are artefacts, or algorithms not working stably at all.
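Very roughly, and only as a sketch with hypothetical names (not darktable's actual pipeline API), the ROI idea means a module is asked to produce output only for the region currently on screen, at the current scale:

```c
/* Sketch of the ROI idea, with invented names (not darktable's real API). */
typedef struct {
  int x, y;           /* top-left of the visible region in full-image coordinates */
  int width, height;  /* size of the visible region in pixels                     */
  float scale;        /* display scale, e.g. 0.5 when zoomed out to 50%           */
} roi_t;

/* Toy "module" that brightens only the pixels inside the ROI of a
   single-channel full-resolution buffer.  Real modules additionally work on
   scaled data, which is where the interpolation and the artefact/stability
   issues mentioned above come from.                                          */
static void toy_module_process(float *img, int img_width, const roi_t *roi, float gain)
{
  for(int j = 0; j < roi->height; j++)
    for(int i = 0; i < roi->width; i++)
      img[(roi->y + j) * img_width + (roi->x + i)] *= gain;
}
```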


Thanks for explaining, I really wasn’t sure. I always wondered what performance hit scaling added, assuming that at 1:1, panning with no scaling should be faster, artifact-free and more accurate, whereas zooming in and out requires scaling and might suffer from artifacts and a performance hit. Thanks.

I don’t quite understand your idea, but I assume you are looking for something similar to what The Foundry Nuke (commercial software) does. It creates a proxy file for images, and the viewer only processes the visible part of the image if it’s not at 100%.

This is a video sample

But here are the answers to your questions.

I recall reading something like this, but subjectively the processing speed is not proportional to image area. Maybe some modules use tiles larger than the view?

Nope.

You can’t expect proportional performance for various reasons; the first is the pipeline overhead, which is significant. Also, “… calculates data only for the area visible in the darkroom window …” is a slight simplification. The other significant points are pretty technical and difficult to explain.
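As a toy model with made-up numbers (only to illustrate the overhead point, not measured darktable timings): if every pipeline run pays a fixed cost on top of the per-pixel work, a tiny patch is still nowhere near “proportionally” faster.

```c
#include <stdio.h>

/* Toy cost model, numbers are invented: a fixed per-run pipeline overhead plus
   per-megapixel module work.  A 128x128 patch (~0.016 MP) is much faster than
   a 24 MP image, but nowhere near the ~1500x the area ratio would suggest.    */
int main(void)
{
  const double overhead_ms = 120.0;  /* assumed fixed cost per pipeline run */
  const double per_mp_ms   = 40.0;   /* assumed module cost per megapixel   */

  const double full_ms  = overhead_ms + per_mp_ms * 24.0;   /* ~1080 ms */
  const double patch_ms = overhead_ms + per_mp_ms * 0.016;  /* ~121 ms  */

  printf("full image: %.0f ms, 128x128 patch: %.0f ms\n", full_ms, patch_ms);
  return 0;
}
```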
