Looking for a gmic + python scripter to help me out here.

Good suggestion. I don’t have nearly as long a to-do list as @David_Tschumperle, but I sorted mine. By priority. I’ll learn one of these days…

2 Likes

I wanted users to have the option of using a non-square seed image. That’s why you’re seeing hyp_img, and why the initial n is based on solving a=sqrt(2b^2) for b. Non-square images aren’t quite the correct way of using this script, but eh, I don’t really care whether users use a square image or not, and I do know how to add an option that forces a non-square image into a square seed.
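For what it’s worth, the n computation from rep_hamiltonian can be sanity-checked in a few lines of Python (the function name `seed_side` is mine): for a square w×h image it recovers the side length exactly.

```python
import math

def seed_side(w, h):
    # n as computed in rep_hamiltonian: int(sqrt(hyp_img^2 / 2)),
    # where hyp_img = sqrt(w^2 + h^2) is the image diagonal
    return int(math.sqrt((w * w + h * h) / 2))
```

For a square image the diagonal satisfies hyp = sqrt(2)·s, so hyp²/2 = s² and n = s; for non-square images it lands somewhere between h and w.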

I did that, it still doesn’t seem to work.

In addition, that can be simplified to:

NN=(N-1)^2
increment=1/NN
incrementValue=-NN
zeroV=4*NN

Here’s the new code which is a lot easier to read.

rep_hamiltonian:

repeat $! l[$>]
 if s>1 to_gray. fi
 ge. {avg(im,iM)}

 hyp_img={sqrt(sqr(w)+sqr(h))}
 n={int(sqrt(($hyp_img^2)/2))}
 nn={($n-1)^2}
 npts={sqr($n)}
 npts_n={$npts-$n}
 npts_1={$npts-1}
 
 increment={1/$nn}
 incrementValue={-$nn}
 zeroV={4*$nn}
 
 $npts,$npts
 
 $npts,5,1,1,"begin(
   const inc_v="$incrementValue";
   const zv="$zeroV";
   cv=[crop(#-2)];
   initial_pos_x=[0,0,0,1,"$n"];
   initial_pos_y=["$n",1,0,0,0];
   v(a)=inc_v*cv[a];
  );
  y==2?(
   i(#-1,x,x)=zv;
  ):(
   pos_x=x+initial_pos_x[y];
   pos_y=x+initial_pos_y[y];
   i(#-1,pos_x,pos_y)=v(x);
  );"
 k..
 eigen.
 k.
endl done

Also, this doesn’t work either:

 increment={(1/($n-1))^2}
 incrementValue={-1/$increment}
 zeroV={4/$increment}

EDIT: unroll instead of crop does not work either. I’m stumped.

1 Like

I’ve been able to implement a first version of the Arnoldi method for approximation of the largest eigenvalues of a large matrix. Looks good (and fast :slight_smile: ).
I’ll try to make a version of it working as eigen, and another version working as eig() in the math parser.

Stay tuned! :slight_smile:

2 Likes

So… I’ve added a command meigen in the stdlib, which approximates the m largest eigenvalues of a square matrix (passed as a selection of the command). It’s much faster than computing all the eigenvalues, but it computes only an approximation of them (the bigger m, the better the approximation). Also, it does not return the associated eigenvectors.

Some examples of timing on my machine:

  • Approximation of the 64 largest eigenvalues for a 1024x1024 matrix takes 0.181 seconds (while eigen takes 36.09 seconds).
  • Approximation of the 32 largest eigenvalues for a 9000x9000 matrix takes 9.288 seconds.

I’ve also added a new equivalent math function meig(A,m,n) that returns a vector(#m) of the approximated eigenvalues from the square matrix A (with size n x n). Include math_lib in your expression to be able to use it.
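For readers curious about the idea behind this, here is a minimal numpy sketch of the Arnoldi iteration (this is my own illustration, not necessarily how meigen is implemented — see the linked commit for that): m matrix-vector products build an orthonormal Krylov basis and a small m×m Hessenberg matrix, whose eigenvalues (the Ritz values) approximate the extremal eigenvalues of the big matrix.

```python
import numpy as np

def arnoldi_eigs(A, m, seed=0):
    """Approximate extremal eigenvalues of A with m Arnoldi steps:
    the eigenvalues of the small m x m Hessenberg matrix H (Ritz
    values) approximate the largest-magnitude eigenvalues of A."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = q / np.linalg.norm(q)
    for k in range(m):
        w = A @ Q[:, k]
        for j in range(k + 1):          # modified Gram-Schmidt
            H[j, k] = Q[:, j] @ w
            w -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        if H[k + 1, k] < 1e-12:         # hit an invariant subspace: stop
            m = k + 1
            break
        Q[:, k + 1] = w / H[k + 1, k]
    ritz = np.linalg.eigvals(H[:m, :m])
    return np.sort(ritz.real)[::-1]     # largest first; the outermost
                                        # Ritz values converge fastest
```

The cost is m matrix-vector products plus an m×m eigenproblem instead of the full n×n one, which is where the large speedup over eigen comes from.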

At this point, I want to say that:

  • I’ve not tested it thoroughly (just on random matrices). Not sure how it behaves with ill-conditioned matrices.
  • I hope this will be useful for you in solving the OP’s initial problem. I haven’t followed all the discussion, so I can’t really tell. Let me know if you succeed in using it.
  • If you’re interested in how I did it: https://github.com/dtschump/gmic/commit/178a52a6bd1fde6c0eda58d6939fc3e97a83498f

And of course, as always, a $ gmic update will be necessary to get this new command.

4 Likes

Ah me. Time to rename G’MIC as ‘David’s Delightful Distraction Machine.’ Methinks he worked past midnight. I’m five hours behind CET…

Five hours behind him, I’ve been fooling around with a diagonalizer.

  1. It starts with images of arbitrary dimension and spectra.
  2. Take the luminance, threshold it to make the seedmap image, then unroll it.
  3. It is my wont to interpret this as the smallest, most offset +/- diagonal.
  4. Calculating N is a two-stepper.
    a. Find the ‘ideal N’ by the quadratic formula.
    b. It has a fractional part, almost always. I prefer to round this ‘ideal N’ up to the next highest even number, and declare this to be N. I find that even N makes padding and positioning these diagonals somewhat more tractable later on.
  5. Having rounded up N, it is necessary to resize the smallest diagonal so that its length matches; N*(N-1) tells me what the new length is. It is resized with no interpolation and a periodic boundary policy; it might be more appropriate to change that to a black fill policy instead, though no harm seems to come of it.
  6. The lesser diagonal is essentially a copy of the smallest diagonal, resized to N^2-1, with no interpolation and a periodic boundary policy as before.
  7. The main diagonal, of length N^2, is created and filled with 4*(N-1)^2.
  8. The offset diagonals are scaled by -(N-1)^2.
  9. Apply -diagonal to convert these three vectors to diagonal matrices. Though disassembled, and of different sizes, the data for the upper diagonal of the Hamiltonian is in hand. It is necessary to duplicate, pad and shift the two offset diagonals so that, later on, they can be added to the main diagonal.
  10. There follow finicky bits of matrix copying, expanding by xy and cropping to position the off-diagonal matrices in their appropriate places, adding the pipeline together, and consolidating to a single image, the Hamiltonian, which I wrote out as a float .cimg
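The diagonal construction in steps 5–9 can be sketched in a few lines of numpy (a rough equivalent of the idea, not the actual pipeline; `build_hamiltonian` and `seed` are my names, and the slicing stands in for the resize steps):

```python
import numpy as np

def build_hamiltonian(seed, N):
    # seed: the unrolled, thresholded (0/1) well image, length >= N*N
    nn = (N - 1) ** 2
    main = np.full(N * N, 4.0 * nn)           # main diagonal: 4*(N-1)^2
    off1 = -float(nn) * seed[: N * N - 1]     # lesser diagonals, length N^2-1
    offN = -float(nn) * seed[: N * N - N]     # offset diagonals, length N^2-N
    return (np.diag(main)
            + np.diag(off1, 1) + np.diag(off1, -1)
            + np.diag(offN, N) + np.diag(offN, -N))
```

The memory-hungry part of the G’MIC version corresponds to the five N²×N² intermediates that np.diag materializes here; a sparse representation would avoid that entirely.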

@David_Tschumperle’s brand new toys (!!!) notwithstanding, I harnessed the Python side to find select eigenvectors, so as to limit testing to just the diagonalizer. I asked for five solution vectors, exporting the solution eigenvectors to output .cimg files for post-processing. A run for N=130 took about thirty seconds.

Using this seed image:
quantum_torus
I obtained these select images (filtered by post-processing, given below)
hamvecm_0
vector_0, N=130, scaled up to 512 and colorized via -map rainbow.
hamvecm_1
vector_1, the next-door neighbor…
hamvecs_0
vector from a smaller seed image of the torus, N=75, scaled up and colorized as the others.

This is the diagonalizer:

gmic                                            \
   -input quantum_torus.png                     \
   -luminance.                                  \
   -threshold. 50%                              \
   -unroll. x                                   \
   hy='{(1+(sqrt(1+4*w)))/2}'                   \
   n='{round($hy/2,1,1)*2}'                     \
   nn='{($n-1)^2}'                              \
   iv='{-$nn}'                                  \
   zv='{4*$nn}'                                 \
   -resize. '{$n*($n-1)}',1,1,1,0,2             \
   -mul. '{$iv}'                                \
   +resize. '{$n^2-1}',1,1,1,0,2                \
   -reverse                                     \
   -input '{$n^2}',1,1,1,'{$zv}'                \
   -name. diag0                                 \
   -move.  0                                    \
   -diagonal                                    \
   -expand_xy. '{$n}',0                         \
   +crop.   '{$n}',0,'{w-1}','{h-$n-1}',0       \
   -name. diagmN                                \
   -move. 0                                     \
   -crop. 0,'{$n}','{$n^2-1}','{h-1}',0         \
   -name. diagpN                                \
   -expand_xy.. 1,0                             \
   +crop..   1,0,'{w}','{h-1}',0                \
   -name. diagm1                                \
   -move. 1                                     \
   -crop.. 0,1,'{w-1}','{h}',0                  \
   -name.. diagp1                               \
   -add                                         \
   -output. hamilton_test_'{$n}'.cimg,float     \

This is the post processor:

gmic                                            \
   -input hamilton_test_130_out_0.cimg          \
   '{sqrt(h)}','{sqrt(h)}',1,1                  \
   -fill. [-2]                                  \
   -sqr.                                        \
   -normalize. 0,1                              \
   -pow. 0.75                                   \
   -r2dx. 512,5                                 \
   -normalize. 0,255                            \
   -map. rainbow                                \
   -display                                     \
   -output. hamvecm_0.png
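In numpy terms, the post-processor above amounts to something like this sketch (`render_vector` is my name; the r2dx upscale and rainbow mapping are left to the display side):

```python
import numpy as np

def render_vector(vec):
    # Reshape the eigenvector into a square image, square it (|psi|^2),
    # normalize to [0, 1], and gamma-compress (the -pow. 0.75 step).
    n = int(round(np.sqrt(vec.size)))
    img = vec.reshape(n, n) ** 2
    img = img - img.min()
    img = img / img.max()
    return img ** 0.75
```

Squaring turns the (signed) eigenvector into a probability density, which is why the rendered images are non-negative regardless of the sign conventions of the eigensolver.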

I find the shapes eerie, compelling and interesting. Even with a known seed map, it is not quite certain what the solution vectors will look like.
Next step is to disconnect the Python bit and run with @David_Tschumperle’s new toys. Then we have an end-to-end G’MIC pipeline. Doubt that the diagonalizer is optimal; certainly could use a fresh pair of eyes. Memory usage balloons at one point to 16GB, for N>=150, so it won’t run on Granny’s Tandy TRS-80, probably.

That’s it for now. Everybody have fun.

3 Likes

I’m pretty sure your algorithm is broken somewhere. Or perhaps I read your post inattentively and you introduced some new settings.

My pictures with this seed



2 Likes

Think you are right. Looks like horizontal scaling may be taking place. Thanks for raising the flag. Not sure when I can act on it though.

Probably not going to play in this sandbox for the first half of this week - Real Life chores and obligations press.

  1. As @weightt_an points out, there are (probable) distortions in how I’m rendering an eigenvector compared to how his script renders it. My suspicions run toward the initial calculation and rounding of N that I do. That, and the padding out of the offset diagonals when it comes to boundary policy. Currently I use periodic. That actually adds coefficients to the matrix that are not in the original unrolled image. Plausible, then, that in effect I make a slightly different seed well than @weightt_an does. The solution is to switch to a Dirichlet boundary policy - i.e. consider off-image pixels as black. That adds no coefficients not already present in the original unrolled image.
  2. My diagonalizer can put up to five huge images on the image stack at one time. These account for the bulge of up to 16 GB in memory consumption. I like @Reptorian’s diagonalizer because it writes to one large image from smaller sources. Still think it is worth some time and energy to track down what’s going on with that approach, because it is the more memory-friendly one.
  3. Even with a defective diagonalizer, it is not that far a reach to hook together a G’MIC-only pipeline. If somebody else doesn’t beat me to it, that would be my next destination on my periodic bus trips to Schrodinger’s Well. Have fun!

@grosgood I’m wondering if you could dump the vectors utilized by @weightt_an’s Python script and reveal their values. I would like a .zip of cimgs to see where I went wrong. If I could compare the values, I might be able to fix my script.

  1. I’ll use the seed image in post 92. The torus.
  2. I’ll use N=30 so that the Hamiltonian will only be 900,900,1,1
  3. I’ll generate the Hamiltonian from his script, which utilizes scipy.sparse.diags()
  4. I’ll save that out as ham_N90.cimg and zip it up, posting the zip file here.

Not right now, though. Doing pay work. Probably 6 hours from now, my evening.

Well, about 4 hours. They canned a meeting on me. I’m imperfectly disappointed.
Here goes:
well image = quantum_torus_small.png 30,30,1,1
N = 30
No_points N*N = 900
increment 1/(N-1)^2 = 0.0011890606420927466
incrementValue -(N-1)^2 = -841.0
zeroV 4*(N-1)^2 = 3364.0
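Those constants can be sanity-checked in a couple of lines:

```python
# Constants from the dump above, for N = 30
N = 30
nn = (N - 1) ** 2        # (N-1)^2 = 841
increment = 1 / nn       # 0.0011890606420927466...
incrementValue = -nn     # -841
zeroV = 4 * nn           # 3364
```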

The diagonals offset from the main diagonal have two values, 0 or -841. The main diagonal has a value of 3364. These values are consistent with those computed above.

The first non-zero pixels for the five diagonals are:

diagmN 73,103             (diagonal begins at 0,30. Initial interval of zero-valued pix: 73)
diagm1 73,74              (diagonal begins at 0,1.  Initial interval of zero-valued pix: 73) 
diag0  0, 0
diagp1 74,73              (diagonal begins at 1,0.  Initial interval of zero-valued pix: 73)
diagpN 103,73             (diagonal begins at 30,0. Initial interval of zero-valued pix: 73)

These are consistent with quantum_torus_small.png. Its first two rows are black (60 pixels); the third row is black from zero to twelve (13 pixels): 30 + 30 + 13 = 73.

The well image, the Hamiltonian generated with @weightt_an’s script, and 30 rendered eigenvectors, also computed from his script, scaled from 30->128, normalized and mapped with rainbow.
hamneigs.zip sha256 hash 0d95f76775232a6aedb8a3a6463eadc6cb9fbb060987c39f164ab6ca88e5ad96
Have fun.
hamneigs.zip (434.5 KB)

…I prefer to round this ‘ideal N’ up to the next highest even number, and declare this to be N…
…It is resized without interpolation and periodic boundary policy…

Better to round to the nearest even than to round up. Rounding up was adding two: N=s+2 instead of N=s for square images of dimension s. I also changed the boundary policy to Dirichlet (off-edge pixels are declared to be black), which doesn’t inject repeated coefficients. I seem to have more reasonable vector renderings now…
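To make the rounding effect concrete (a small Python illustration; function names are mine): for a square seed of side s, the ‘ideal N’ solving N*(N-1) = s² sits just above s + 1/2, so rounding up to the next even number lands on s+2, while rounding to the nearest even lands back on s.

```python
import math

def ideal_n(w):
    # positive root of N*(N-1) = w, where w is the unrolled seed length
    return (1 + math.sqrt(1 + 4 * w)) / 2

def round_up_even(x):
    return math.ceil(x / 2) * 2

def round_nearest_even(x):
    return round(x / 2) * 2
```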

hamvecm_0

My diagonalizer is a memory pig, though. Still. Time to try @David_Tschumperle’s new toys…

2 Likes

I fixed the code for my own script.

rep_hamiltonian:

repeat $! l[$>]
 if s>1 to_gray. fi
 ge. {avg(im,iM)}

 hyp_img={sqrt(sqr(w)+sqr(h))}
 n={int(sqrt(($hyp_img^2)/2))}
 nn={($n-1)^2}
 npts={sqr($n)}
 npts_n={$npts-$n}
 npts_1={$npts-1}
 
 
 increment={1/$nn}
 incrementValue={-$nn}
 zeroV={4*$nn}
 
 unroll *. $incrementValue
 
 $npts,$npts
 
 $npts,5,1,1,"begin(
  const inc_v="$incrementValue";
  const zv="$zeroV";
  initial_pos_x=[0,0,0,1,"$n"];
  initial_pos_y=["$n",1,0,0,0];
 );
 y==2?(
  i(#-1,x,x)=zv;
 ):(
  pos_x=x+initial_pos_x[y];
  pos_y=x+initial_pos_y[y];
  i(#-1,pos_x,pos_y)=i(#-2,0,x);
 );"
 k..
endl done

I’m still not getting the same result with the post-processing made by @grosgood. Everything is supposed to check out. eigen/meigen doesn’t seem to work out.

1 Like
i(#-1,pos_x,pos_y)=i(#-2,0,x);

These indices look off. Shouldn’t this be

i(#-2,pos_x,pos_y)=i(#-3,0,x);

In other words, the first image on the list has the unrolled pattern of the image (#-3, the leftmost, is the unrolled image; #-2 is the Hamiltonian; #-1 drives the counters). No?

-meigen only returns eigenvalues. Looks like @David_Tschumperle was thinking eigenvalues/eigenvectors when he wrote the docstrings, but just pulled the eigenvalues through store. Doesn’t look hard to fix…

The indices aren’t actually off. What I did was insert values into the Hamiltonian image directly during the creation of a new image. The #-3 index doesn’t exist until the creation of that image is finished. Another good example of this technique is rep_pfrac.

1 Like

Got it. The counter controller doesn’t really exist yet as the last item on the stack. OK…

@Reptorian
Looks like we make the same kind of diagonals on quantum_torus_small.png @ N=30. They diff to zero. I like your diagonalizer the best. I think we can check off the diag() implementation on the todo. Got to work a bit on @David_Tschumperle’s -meigen to pull the eigenvectors from the math expression environment through to the command line. Or use the math expression parser with -exp{math_lib} and pull from meig(A,m,n) via the math expression’s store().
Don’t know when I can pick that up; it’s up for grabs. Hopefully can pick it up toward the end of the week. Stay tuned… Almost home…

gosgood@bertha ~/git_repositories/SchrodingerWellPython $ gmic osg_hamiltonian.cimg rep_hamiltonian.cimg -sub[-2,-1]
[gmic]-0./ Start G'MIC interpreter.
[gmic]-0./ Input file 'osg_hamiltonian.cimg' at position 0 (1 image 900x900x1x1).
[gmic]-1./ Input file 'rep_hamiltonian.cimg' at position 1 (1 image 900x900x1x1).
[gmic]-2./ Subtract images [0,1].
[gmic]-1./ Display image [0] = 'osg_hamiltonian.cimg'.
[0] = 'osg_hamiltonian.cimg':
  size = (900,900,1,1) [3164 Kio of floats].
  data = (0,0,0,0,0,0,0,0,0,0,0,0,(...),0,0,0,0,0,0,0,0,0,0,0,0).
  min = 0, max = 0, mean = 0, std = 0, coords_min = (0,0,0,0), coords_max = (0,0,0,0).
[gmic]-1./ End G'MIC interpreter.

@grosgood Have you been able to solve it?

1 Like

Still a WIP.
@David_Tschumperle brings forth eigenvalues only on the command line -meigen and its underlying math expression equivalent meig(A,m,n). As I was rummaging through his implementation in gmic_stdlib.gmic, I became increasingly disgusted with myself over the “plug in the numbers, turn the crank” approach. I don’t like it when I find myself trying to find eigenvalues and eigenvectors of giganormous matrices. Feels like the dumb naïve approach, numerically speaking. Nice when you have a workstation with 128GB RAM, but what about people with 8GB laptops? So I took a step back to ask myself: “What am I doing?”

I believe what I’m doing is depicted here: solving for the ψi (the wave functions) in Schrödinger’s Equation for energy levels, where the Hamiltonian operator (Ho) is in play. The solution is a numerical one (not analytic), so it looks like a square data set of particle probabilities p at discrete x, y locations. This being quantum mechanics, Schrödinger’s Equation holds only for discrete energy levels. Finding (some of) those energy levels involves extracting eigenvectors (a.k.a. eigenfunctions), the ψi, and eigenvalues, the Ei. These ψi are the basis of the pretty pictures.

I don’t understand the analytic justification for the mechanics behind this construction of Ho: unrolling the well (seed) image into off-diagonal coefficients, the scaling and spacing of the off-diagonals and the main one. I want to grasp that basis - connect those dots - and, from that, grasp whether there is any way better than this naïve “follow-your-nose” approach we seem to be taking. So I’ve made progress, I think, in coming to know what I don’t know. ’Tis the negative that shapes the positive, and all that. I feel halfway up to writing my own version of -meigen that solves for the ψi in a not-so-memory-sprawling way. Another weekend, maybe.
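For the record, my current understanding of the construction (an assumption on my part, to be checked against the Sandvik notes below): the coefficients are exactly the standard 5-point finite-difference discretization of the Laplacian on a grid with spacing h = 1/(N-1):

```latex
-\nabla^2 \psi \,\big|_{i,j} \;\approx\;
\frac{4\,\psi_{i,j} - \psi_{i+1,j} - \psi_{i-1,j} - \psi_{i,j+1} - \psi_{i,j-1}}{h^2},
\qquad h = \frac{1}{N-1}.
```

That would make the main diagonal of Ho carry 4/h² = 4(N-1)² (our zeroV) and the ±1 and ±N off-diagonals carry -1/h² = -(N-1)² (our incrementValue), masked by the unrolled well image; unrolling the 2D grid row by row is what turns the ψ_{i,j±1} neighbors into the ±N diagonals.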

1 Like

Notes for me to come back to…
Useful search terms for Google: schrodinger’s energy equation numerical solution
From Boston University. Pay particular attention to Section 4, techniques in the construction of Ho: Numerical Solutions of the Schrödinger Equation, Dr. Anders W. Sandvik. Steep reading for one rusty in quantum mechanics (me! me!), but tractable, given enough cups of coffee…

1 Like