About this: I’m actually a bit to blame for agreeing to implement this feature.
In practice, if you want a random value in an open range ]a,b[, it's better (and faster) to write, for instance, u(a+eps,b-eps) with eps=1e-5 rather than u(a,b,0,0).
In the end, what the algorithm does is indeed add/subtract an epsilon to the specified bounds, but it does this every time you call the function, whereas the bounds can often be precomputed as constant values.
So in practice it's not that useful.
So I prefer not to add these options to the new rand() functions.
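To make the trade-off concrete, here is a minimal C++ sketch (not G'MIC's actual code; the bounds a, b and the eps value are placeholders): shrinking the bounds once up front leaves only the draw itself as per-call work, which is what writing u(a+eps,b-eps) achieves in a script.

```cpp
// Minimal sketch: sample strictly inside the open interval ]a,b[ by
// shrinking the bounds ONCE, instead of adjusting them on every call.
#include <cstdio>
#include <random>

int main() {
  const double a = 0.0, b = 1.0, eps = 1e-5; // placeholder values
  // Bounds precomputed as constants; the per-call cost is just the draw.
  std::mt19937 gen(42);
  std::uniform_real_distribution<double> u(a + eps, b - eps);
  for (int i = 0; i < 5; ++i)
    std::printf("%g\n", u(gen)); // always strictly inside ]a,b[
  return 0;
}
```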
Concerning the srand() problem: I will see what I can do, but in any case, keep in mind that for large vectors the filling with random values is done in parallel, and in that case there is no way to ensure the ordering of the filling is always the same.
So, in the multi-threaded case, srand() is practically useless.
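A deliberately problematic C++/OpenMP sketch of that issue (compile with -fopenmp; this is an illustration, not G'MIC's filling code): even with a fixed seed, the threads race to consume values from the shared rand() sequence, so which value lands at which index depends on scheduling, and the concurrent rand() calls are themselves a data race.

```cpp
// Why a fixed seed cannot guarantee reproducible output when a buffer
// is filled in parallel: thread interleaving changes from run to run,
// and concurrent std::rand() calls are undefined behavior anyway.
#include <cstdio>
#include <cstdlib>

int main() {
  std::srand(123);               // same seed on every run...
  double v[16];
  #pragma omp parallel for
  for (int i = 0; i < 16; ++i)   // ...but the filling order is not fixed
    v[i] = std::rand() / (double)RAND_MAX;
  for (int i = 0; i < 16; ++i) std::printf("%.3f ", v[i]);
  std::printf("\n");
  return 0;
}
```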
Regarding the epsilon value: wouldn't it be nice to implement C++'s nextafter, and the backward version of it? There's also a use for it with the modulo operation, so you can preserve the value you take the modulo by.
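For reference, a minimal sketch of what that gives you with the standard std::nextafter from <cmath> (the interval bounds here are made-up values): the tightest possible open-interval bounds, with no arbitrary epsilon to choose.

```cpp
// std::nextafter(x, toward) returns the next representable double after
// x in the direction of `toward` -- the "backward version" is just the
// same call with the direction reversed.
#include <cmath>
#include <cstdio>

int main() {
  const double a = 0.0, b = 1.0;          // placeholder bounds
  const double lo = std::nextafter(a, b); // first double above a
  const double hi = std::nextafter(b, a); // last double below b
  std::printf("]%g,%g[ -> [%.17g, %.17g]\n", a, b, lo, hi);
  return 0;
}
```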
So my advice here is not to apply a patch that makes srand() work only for small vectors but not for large ones. There is clearly undefined behavior here, and it's the kind of nightmare a developer doesn't want to have to deal with.
Searching for ‘0x’ yields no results in the Math Expression documentation. It would be nice if there were a note about hexadecimal and binary literals there.
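For clarity, this is the notation being asked about, shown here in C++ (whether the G'MIC math parser accepts these spellings is exactly what the documentation would need to state):

```cpp
// The usual 0x... hexadecimal and 0b... binary literal spellings
// (the 0b form is standard C++ since C++14).
#include <cstdio>

int main() {
  const int hex = 0xFF;   // 255 in hexadecimal
  const int bin = 0b1010; // 10 in binary
  std::printf("%d %d\n", hex, bin);
  return 0;
}
```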
GNU Image Manipulation Program version 2.99.16
git-describe: GIMP_2_99_16
Build: org.gimp.GIMP_official rev 0 for windows
# C compiler #
Using built-in specs.
COLLECT_GCC=C:/msys64/mingw64/bin/cc.exe
COLLECT_LTO_WRAPPER=C:/msys64/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/13.1.0/lto-wrapper.exe
Target: x86_64-w64-mingw32
Configured with: ../gcc-13.1.0/configure --prefix=/mingw64 --with-local-prefix=/mingw64/local --build=x86_64-w64-mingw32 --host=x86_64-w64-mingw32 --target=x86_64-w64-mingw32 --with-native-system-header-dir=/mingw64/include --libexecdir=/mingw64/lib --enable-bootstrap --enable-checking=release --with-arch=nocona --with-tune=generic --enable-languages=c,lto,c++,fortran,ada,objc,obj-c++,jit --enable-shared --enable-static --enable-libatomic --enable-threads=posix --enable-graphite --enable-fully-dynamic-string --enable-libstdcxx-filesystem-ts --enable-libstdcxx-time --disable-libstdcxx-pch --enable-lto --enable-libgomp --disable-libssp --disable-multilib --disable-rpath --disable-win32-registry --disable-nls --disable-werror --disable-symvers --with-libiconv --with-system-zlib --with-gmp=/mingw64 --with-mpfr=/mingw64 --with-mpc=/mingw64 --with-isl=/mingw64 --with-pkgversion='Rev7, Built by MSYS2 project' --with-bugurl=https://github.com/msys2/MINGW-packages/issues --with-gnu-as --with-gnu-ld --disable-libstdcxx-debug --with-boot-ldflags=-static-libstdc++ --with-stage1-ldflags=-static-libstdc++
Thread model: posix
Supported LTO compression algorithms: zlib zstd
gcc version 13.1.0 (Rev7, Built by MSYS2 project)
# Libraries #
using babl version 0.1.107 (compiled against version 0.1.107)
using GEGL version 0.4.47 (compiled against version 0.4.47)
using GLib version 2.76.3 (compiled against version 2.76.3)
using GdkPixbuf version 2.42.10 (compiled against version 2.42.10)
using GTK+ version 3.24.38 (compiled against version 3.24.38)
using Pango version 1.50.14 (compiled against version 1.50.14)
using Fontconfig version 2.14.2 (compiled against version 2.14.2)
using Cairo version 1.17.8 (compiled against version 1.17.8)
Correct me if I’m wrong, but I think G'MIC will only use what is needed by the filter you are currently using. Why would it use more RAM than needed?
And I think filter devs will preferably try to use as little RAM as possible, so their filters can run on most computers.
But using higher-resolution images and many layers will obviously use more RAM.
That’s not necessarily true. Optimization comes after: readable and maintainable code comes first, then optimization. Sometimes I will optimize my work later, after new findings or when there is time.
The first time I coded Fragment Blur, it took a day just to run on a decently-sized image on a machine with 64 GB of RAM, but I got it down to near compiled speed, in C# and multi-threaded at that. Another one is Stitch Distort: I worked on readable code first, and as soon as I had new findings I was able to optimize it. And so on.
That’s why I said “preferably”.
It might not be the first goal, though.
I know I have to keep everything low since I only have 8 GB of RAM… Got enough crashes because of that, heh.