Presentation

This notebook stems from a question posted on OpenClassrooms. The objective is to explore how the eigenvalues and eigenvectors of a tridiagonal matrix are affected by randomness. This is only the beginning of an algorithm for understanding how Anderson localization works.

In [1]:
from matplotlib import pyplot as plt # Import pyplot as plt from the library matplotlib
import scipy
import numpy as np # Import the library numpy under the name np
In [2]:
n = 200 # Matrix size
I0 = 1 # Value of the main diagonal
I1 = 0.1 # Value of the adjacent diagonals
y = [1, 198] # Indices (0-based) of the eigenvectors to be plotted

Now let's create the main tridiagonal matrix A and get its eigenvalues and eigenvectors.

In [3]:
A = np.eye(n, n, k=-1)*I1 + np.eye(n, n)*I0 + np.eye(n, n, k=1)*I1  # Tridiagonal matrix without disorder
print(A)
[[ 1.   0.1  0.  ...,  0.   0.   0. ]
 [ 0.1  1.   0.1 ...,  0.   0.   0. ]
 [ 0.   0.1  1.  ...,  0.   0.   0. ]
 ..., 
 [ 0.   0.   0.  ...,  1.   0.1  0. ]
 [ 0.   0.   0.  ...,  0.1  1.   0.1]
 [ 0.   0.   0.  ...,  0.   0.1  1. ]]
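Since scipy is already imported, the same matrix can also be built directly from its three constant diagonals with `scipy.sparse.diags`. A minimal sketch (an alternative construction, not what the notebook uses):

```python
import numpy as np
from scipy.sparse import diags

n, I0, I1 = 200, 1.0, 0.1
# Scalars broadcast along each diagonal when an explicit shape is given
A_sparse = diags([I1, I0, I1], offsets=[-1, 0, 1], shape=(n, n))
A_dense = A_sparse.toarray()

# It matches the np.eye construction element-wise
A = np.eye(n, k=-1) * I1 + np.eye(n) * I0 + np.eye(n, k=1) * I1
assert np.allclose(A_dense, A)
```

For large n the sparse form is also far cheaper to store, though `np.linalg.eigh` below still needs a dense array.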
In [4]:
eigenvaluesA, eigenvectorsA = np.linalg.eigh(A) # eigh since the matrix is symmetric
print(eigenvaluesA)
[ 0.80002443  0.80009771  0.80021982  0.80039074  0.80061042  0.8008788
  0.80119583  0.80156143  0.8019755   0.80243794  0.80294865  0.80350749
  0.80411434  0.80476903  0.80547142  0.80622133  0.80701857  0.80786296
  0.80875429  0.80969233  0.81067686  0.81170765  0.81278443  0.81390694
  0.81507491  0.81628806  0.81754609  0.81884869  0.82019554  0.82158631
  0.82302067  0.82449826  0.82601873  0.82758169  0.82918678  0.83083359
  0.83252173  0.83425078  0.83602032  0.83782991  0.83967913  0.84156751
  0.84349459  0.8454599   0.84746296  0.84950329  0.85158038  0.85369373
  0.85584282  0.85802713  0.86024611  0.86249924  0.86478596  0.86710571
  0.86945792  0.87184202  0.87425743  0.87670355  0.8791798   0.88168556
  0.88422022  0.88678317  0.88937377  0.8919914   0.89463541  0.89730516
  0.9         0.90271927  0.9054623   0.90822842  0.91101697  0.91382725
  0.91665858  0.91951028  0.92238163  0.92527194  0.92818052  0.93110663
  0.93404957  0.93700863  0.93998307  0.94297218  0.94597521  0.94899145
  0.95202014  0.95506055  0.95811195  0.96117357  0.96424468  0.96732453
  0.97041235  0.97350741  0.97660894  0.97971618  0.98282837  0.98594476
  0.98906459  0.99218708  0.99531149  0.99843703  1.00156297  1.00468851
  1.00781292  1.01093541  1.01405524  1.01717163  1.02028382  1.02339106
  1.02649259  1.02958765  1.03267547  1.03575532  1.03882643  1.04188805
  1.04493945  1.04797986  1.05100855  1.05402479  1.05702782  1.06001693
  1.06299137  1.06595043  1.06889337  1.07181948  1.07472806  1.07761837
  1.08048972  1.08334142  1.08617275  1.08898303  1.09177158  1.0945377
  1.09728073  1.1         1.10269484  1.10536459  1.1080086   1.11062623
  1.11321683  1.11577978  1.11831444  1.1208202   1.12329645  1.12574257
  1.12815798  1.13054208  1.13289429  1.13521404  1.13750076  1.13975389
  1.14197287  1.14415718  1.14630627  1.14841962  1.15049671  1.15253704
  1.1545401   1.15650541  1.15843249  1.16032087  1.16217009  1.16397968
  1.16574922  1.16747827  1.16916641  1.17081322  1.17241831  1.17398127
  1.17550174  1.17697933  1.17841369  1.17980446  1.18115131  1.18245391
  1.18371194  1.18492509  1.18609306  1.18721557  1.18829235  1.18932314
  1.19030767  1.19124571  1.19213704  1.19298143  1.19377867  1.19452858
  1.19523097  1.19588566  1.19649251  1.19705135  1.19756206  1.1980245
  1.19843857  1.19880417  1.1991212   1.19938958  1.19960926  1.19978018
  1.19990229  1.19997557]
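As a sanity check, a symmetric tridiagonal Toeplitz matrix of this form has a known closed-form spectrum, λ_k = I0 + 2·I1·cos(kπ/(n+1)) for k = 1…n, which explains why the values above fill the band [I0 − 2·I1, I0 + 2·I1] = [0.8, 1.2]. A quick comparison with the numerical result:

```python
import numpy as np

n, I0, I1 = 200, 1.0, 0.1
A = np.eye(n, k=-1) * I1 + np.eye(n) * I0 + np.eye(n, k=1) * I1
numerical = np.linalg.eigh(A)[0]  # eigh returns eigenvalues in ascending order

# Closed-form spectrum of the tridiagonal Toeplitz matrix, sorted ascending
k = np.arange(1, n + 1)
analytic = np.sort(I0 + 2 * I1 * np.cos(k * np.pi / (n + 1)))

assert np.allclose(numerical, analytic)
```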

Now, we have to add some noise to the matrix A.

In [5]:
h = 0.01 # Coefficient setting the maximum noise amplitude
I2 = 1 # 0 to keep the main diagonal noise-free, else 1
I3 = 1 # 0 to keep the two adjacent diagonals noise-free, else 1
In [6]:
noise = np.random.rand(n, n) * h
tridiag = np.eye(n, n, k=-1)*I3 + np.eye(n, n)*I2 + np.eye(n, n, k=1)*I3  # Tridiagonal mask so the noise only touches the three diagonals
w = noise * tridiag
print(w)
[[  7.98890581e-03   5.15813897e-03   0.00000000e+00 ...,   0.00000000e+00
    0.00000000e+00   0.00000000e+00]
 [  2.42807632e-03   4.14745354e-03   3.60562689e-03 ...,   0.00000000e+00
    0.00000000e+00   0.00000000e+00]
 [  0.00000000e+00   9.00083583e-03   4.07312172e-03 ...,   0.00000000e+00
    0.00000000e+00   0.00000000e+00]
 ..., 
 [  0.00000000e+00   0.00000000e+00   0.00000000e+00 ...,   8.32444051e-03
    4.31909450e-03   0.00000000e+00]
 [  0.00000000e+00   0.00000000e+00   0.00000000e+00 ...,   2.90376738e-03
    4.53599732e-04   7.65350003e-03]
 [  0.00000000e+00   0.00000000e+00   0.00000000e+00 ...,   0.00000000e+00
    9.34827426e-05   6.18288194e-03]]
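Note that w is not symmetric: the entries above and below the diagonal are drawn independently, which is why B will need `np.linalg.eig` rather than `eigh` below. If you would rather model a symmetric disorder, as in the usual Anderson model, one option (a sketch, not what this notebook does) is to symmetrize the noise before masking it:

```python
import numpy as np

n, h = 200, 0.01
tridiag = np.eye(n, k=-1) + np.eye(n) + np.eye(n, k=1)
noise = np.random.rand(n, n) * h

# Averaging with the transpose makes w_sym[i, j] == w_sym[j, i],
# so A + w_sym stays symmetric and eigh can still be used
w_sym = (noise + noise.T) / 2 * tridiag
assert np.allclose(w_sym, w_sym.T)
```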
In [7]:
B = A + w # Add the tridiagonal noise matrix to the ordered matrix
print(B)
[[ 1.00798891  0.10515814  0.         ...,  0.          0.          0.        ]
 [ 0.10242808  1.00414745  0.10360563 ...,  0.          0.          0.        ]
 [ 0.          0.10900084  1.00407312 ...,  0.          0.          0.        ]
 ..., 
 [ 0.          0.          0.         ...,  1.00832444  0.10431909  0.        ]
 [ 0.          0.          0.         ...,  0.10290377  1.0004536
   0.1076535 ]
 [ 0.          0.          0.         ...,  0.          0.10009348
   1.00618288]]
In [8]:
eigenvaluesB, eigenvectorsB = np.linalg.eig(B) # eig, since the matrix is not symmetric anymore
print(eigenvaluesB)
[ 1.2174268   1.21661971  1.21535938  1.21611825  1.21435337  1.21337677
  1.21350452  1.21303644  1.21285817  1.2123028   1.21168217  1.21095839
  1.2105309   1.21004637  1.20935219  1.20844395  1.20767751  1.20652035
  1.20573026  1.2049651   1.20326516  1.19365471  1.19473803  1.20211213
  1.20113333  1.20007677  1.19858534  1.19772295  1.19630752  1.19201997
  1.19085558  1.18899561  1.18438436  1.18564374  1.1874686   1.16692771
  1.1692246   1.18247587  1.18084904  1.17915506  1.17141739  1.17318384
  1.17503838  1.17742239  1.16465948  1.15848022  1.16266858  1.16088929
  1.15642188  1.15383938  1.15139658  1.14867735  1.14651325  1.14438517
  1.14213593  1.13950499  1.13718732  1.13452901  1.11231861  1.13152256
  1.11567227  1.12898751  1.12656307  1.11815321  1.12362688  1.12143146
  1.10695162  1.10980148  1.08901803  1.09286346  1.09554701  1.09837403
  1.10398194  1.10133845  1.08629165  1.08346297  1.08001573  1.07696006
  1.07428968  1.07139889  1.06803783  1.04895411  1.06505162  1.0518616
  1.05536858  1.06148865  1.05858326  1.04596794  1.0428211   1.02913716
  1.03259956  1.03576525  1.03944147  1.02629985  1.02315507  1.01983487
  1.01356564  1.01023504  1.00668658  1.0037245   1.0168086   0.99996417
  0.99661807  0.99018157  0.98733885  0.96419377  0.96727286  0.98327809
  0.98045206  0.97693306  0.97408288  0.9932613   0.9612058   0.97086144
  0.9575231   0.954534    0.95166409  0.94184771  0.94527716  0.94799502
  0.9390107   0.9361058   0.93326427  0.91181916  0.91471779  0.91731395
  0.92337101  0.92088839  0.92695732  0.79244269  0.92997042  0.90848915
  0.90294948  0.89972017  0.8973004   0.89447343  0.8917818   0.87540594
  0.87780856  0.88878903  0.8811505   0.8832998   0.88644086  0.90590269
  0.79309104  0.87038342  0.87279912  0.79513355  0.79549721  0.79586311
  0.79599137  0.86301734  0.86045735  0.79618216  0.85833438  0.86528248
  0.85580946  0.86790198  0.84735826  0.84926168  0.85351536  0.79857828
  0.79942277  0.79808816  0.797317    0.7971408   0.84533594  0.80002572
  0.80500161  0.80383052  0.80309392  0.80276147  0.80125853  0.85157471
  0.84294992  0.84067788  0.83259636  0.83507325  0.83069803  0.82894365
  0.81058715  0.82777722  0.838904    0.83680948  0.82236326  0.82397484
  0.81793177  0.81894926  0.81670789  0.81207036  0.8133911   0.80986969
  0.8076077   0.80833613  0.81511308  0.79677282  0.82534313  0.82054092
  0.80684349  0.80063397]
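One caveat: `np.linalg.eig` on a non-symmetric real matrix returns complex arrays in general. Here the spectrum happens to stay real (a real tridiagonal matrix whose paired off-diagonal entries share the same sign is similar, via a diagonal scaling, to a symmetric one), but it is worth checking before sorting and plotting. A sketch, with a seed added for reproducibility:

```python
import numpy as np

np.random.seed(0)  # seed added here for reproducibility; not in the notebook
n, I0, I1, h = 200, 1.0, 0.1, 0.01
A = np.eye(n, k=-1) * I1 + np.eye(n) * I0 + np.eye(n, k=1) * I1
mask = np.eye(n, k=-1) + np.eye(n) + np.eye(n, k=1)
B = A + np.random.rand(n, n) * h * mask

vals, vecs = np.linalg.eig(B)  # complex dtype in general
if np.allclose(vals.imag, 0):
    vals = vals.real  # safe: the spectrum is real up to round-off
```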

Let's see how the eigenvalues changed.

In [9]:
plt.plot(eigenvaluesA, label = "Eigenvalues of A")
plt.plot(eigenvaluesB, label = "Eigenvalues of B")
plt.legend()
plt.show()

We can see that B's eigenvalues are no longer sorted, so let's sort them. The idea is to plot the n-th eigenvector of A and of B side by side; if the eigenvalues are not in the same order, we would not be comparing the same vectors.

In [10]:
idx = eigenvaluesB.argsort()          # indices that sort the eigenvalues in ascending order
eigenvaluesB = eigenvaluesB[idx]      # sort the eigenvalues
eigenvectorsB = eigenvectorsB[:, idx] # and reorder the columns (eigenvectors) of B accordingly
In [11]:
plt.plot(eigenvaluesA, label = "Eigenvalues of A")
plt.plot(eigenvaluesB, label = "Eigenvalues of B")
plt.legend()
plt.show()
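How far can the noise push each eigenvalue? For a symmetric perturbation, Weyl's inequality bounds every matched (sorted) shift by the spectral norm ‖w‖₂; w is not symmetric here, but since A is symmetric the Bauer-Fike theorem still keeps each eigenvalue of B within ‖w‖₂ of some eigenvalue of A. A quick empirical check (a sketch; the exact numbers depend on the random draw, so a seed is added):

```python
import numpy as np

np.random.seed(0)  # seed added for reproducibility; not in the notebook
n, I0, I1, h = 200, 1.0, 0.1, 0.01
A = np.eye(n, k=-1) * I1 + np.eye(n) * I0 + np.eye(n, k=1) * I1
mask = np.eye(n, k=-1) + np.eye(n) + np.eye(n, k=1)
w = np.random.rand(n, n) * h * mask

eigA = np.linalg.eigh(A)[0]                    # already ascending
eigB = np.sort(np.linalg.eig(A + w)[0].real)   # sort to pair them up

shift = np.max(np.abs(eigA - eigB))
print(shift, np.linalg.norm(w, 2))  # largest matched shift vs spectral norm of the noise
```

Since every entry of w lies in [0, h] and each row touches at most three entries, the shift stays on the order of h.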

Now both spectra are sorted in ascending order, and we can plot all the eigenvectors requested in y.

In [12]:
for position in y:
    plt.plot(eigenvectorsA[:, position], label="Eigenvector {} of A".format(position + 1))
    plt.plot(eigenvectorsB[:, position], label="Eigenvector {} of B".format(position + 1))

plt.title("Graph of the matrices' eigenvectors as a function of their elements' position")
plt.xlabel("Element position")
plt.ylabel("Arbitrary unit")
plt.yticks([]) # Removes y axis' values since they are arbitrary
plt.legend(loc="upper left", bbox_to_anchor=(1,1)) # Legend is outside the plot

plt.show()

Now we can see that the randomness distorted the eigenvectors (green vs red, or blue vs orange). For the original matrix A, all eigenvectors are sinusoids whose frequency increases with the eigenvalue. For the randomized matrix, the eigenvectors are no longer clean sinusoids but deformed ones, and the (n−a)-th vector is no longer "contained" inside the a-th vector's envelope. This phenomenon is used in quantum theory to "estimate" particle positions. Unfortunately, my level in physics is not good enough to go further on this subject :'(
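A standard next step for quantifying Anderson localization is the inverse participation ratio (IPR) of each normalized eigenvector, Σψ⁴/(Σψ²)²: it is about 1/n for an extended (sinusoidal) state and approaches 1 for a state concentrated on a single site. A minimal sketch, not part of the original notebook, checked here on the ordered matrix A whose states are all extended:

```python
import numpy as np

def ipr(vectors):
    """Inverse participation ratio of each column eigenvector."""
    p2 = np.sum(vectors**2, axis=0)
    return np.sum(vectors**4, axis=0) / p2**2

n, I0, I1 = 200, 1.0, 0.1
A = np.eye(n, k=-1) * I1 + np.eye(n) * I0 + np.eye(n, k=1) * I1
_, vecs = np.linalg.eigh(A)

# Extended sinusoidal states: IPR is close to 3/(2(n+1)), far from 1
print(ipr(vecs).mean())
```

Repeating this on B while increasing h should show the IPR of some eigenvectors growing, which is the signature of localization this notebook is heading toward.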