# Presentation¶

A double pendulum is a dynamical system that exhibits rich behavior with strong sensitivity to initial conditions: it is a classic example of a chaotic system. It consists of one pendulum attached to the end of another.

In this notebook, we will build a model able to simulate the motion of this system. The notation used below is based on the following image:

The equations of motion for this system can be found on Wikipedia. In our case, the model will be solved using scipy.integrate.odeint. Part of the code also comes from the Matplotlib examples.
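As a quick illustration of the solver interface (a minimal sketch using a simple single pendulum, not the double-pendulum equations used below): `scipy.integrate.odeint` takes a derivative function, an initial state and a time array, and returns one row of state per time sample.

```python
import numpy as np
from scipy.integrate import odeint

def simple_pendulum(state, t, g=9.8, l=1.0):
    """state = [theta, omega]; return d(state)/dt for a single pendulum."""
    theta, omega = state
    return [omega, -(g / l) * np.sin(theta)]

t = np.linspace(0, 10, 201)                            # 10 s sampled every 0.05 s
sol = odeint(simple_pendulum, [np.radians(30), 0.0], t)
# sol has shape (len(t), 2): one [theta, omega] row per time sample
```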

# Imports and functions¶

In [6]:
import numpy as np

import scipy.integrate as integrate

import matplotlib.pyplot as plt
import matplotlib.animation as animation
from IPython.display import HTML

In [3]:
G = 9.8  # acceleration due to gravity, in m/s^2
L1 = 1.0  # length of pendulum 1 in m
L2 = 0.5  # length of pendulum 2 in m
M1 = 1.0  # mass of pendulum 1 in kg
M2 = 0.3  # mass of pendulum 2 in kg

dt = 0.05       # simulation time step in s
timestep = 100  # number of time steps per simulation

th1_max = np.radians(100)  # max initial angle of the first arm, in radians
w1_max = np.radians(100)   # max initial angular speed of the first arm, in rad/s
th2_max = np.radians(360)  # max initial angle of the second arm, in radians
w2_max = np.radians(360)   # max initial angular speed of the second arm, in rad/s

t = (np.arange(timestep + 1) + 1) * dt  # time array of exactly timestep+1 samples of dt

In [11]:
def derivs(state, t):
    """Return d(state)/dt for state = [th1, w1, th2, w2]."""
    dydx = np.zeros_like(state)
    dydx[0] = state[1]

    del_ = state[2] - state[0]
    den1 = (M1 + M2)*L1 - M2*L1*np.cos(del_)*np.cos(del_)
    dydx[1] = (M2*L1*state[1]*state[1]*np.sin(del_)*np.cos(del_) +
               M2*G*np.sin(state[2])*np.cos(del_) +
               M2*L2*state[3]*state[3]*np.sin(del_) -
               (M1 + M2)*G*np.sin(state[0]))/den1

    dydx[2] = state[3]

    den2 = (L2/L1)*den1
    dydx[3] = (-M2*L2*state[3]*state[3]*np.sin(del_)*np.cos(del_) +
               (M1 + M2)*G*np.sin(state[0])*np.cos(del_) -
               (M1 + M2)*L1*state[1]*state[1]*np.sin(del_) -
               (M1 + M2)*G*np.sin(state[2]))/den2

    return dydx

def init():
    line.set_data([], [])
    line2.set_data([], [])
    time_text.set_text('')
    return line, line2, time_text

def animate(i):
    thisx = [0, x1[i], x2[i]]
    thisy = [0, y1[i], y2[i]]

    thisx2 = [0, x3[i], x4[i]]
    thisy2 = [0, y3[i], y4[i]]

    line.set_data(thisx, thisy)
    line2.set_data(thisx2, thisy2)
    time_text.set_text(time_template % (i*dt))
    return line, line2, time_text  # all artists must be returned when blit=True

def generate_state():
    return np.random.uniform(-1, 1, 4) * np.array([th1_max, w1_max, th2_max, w2_max])

def simulate(state):
    y = integrate.odeint(derivs, state, t)
    return y

def preprocess_result(X):
    h, w = X.shape
    new_X = np.zeros(shape=(h, w+2))
    new_X[:, 0] = np.cos(X[:, 0])
    new_X[:, 1] = np.sin(X[:, 0])
    new_X[:, 2] = X[:, 1]/w1_max
    new_X[:, 3] = np.cos(X[:, 2])
    new_X[:, 4] = np.sin(X[:, 2])
    new_X[:, 5] = X[:, 3]/w2_max
    return new_X


# Simulation¶

To see the chaotic behavior of the pendulum, let's generate an initial state, plus a second one whose initial angle of the second arm differs by 5% (5% is quite a lot, but the impact usually becomes important only after more than 5 seconds).

An initial state will have the following shape:

$$state = [\theta_1, \omega_1, \theta_2, \omega_2]$$

If we keep the state as is, the model sees a discontinuity when an angle wraps from 359° to 0° or back. We therefore pre-process the state as follows, giving a continuous representation and also normalizing the angular velocities.

$$state = [\cos(\theta_1), \sin(\theta_1), \frac{\omega_1}{\omega_{1,max}}, \cos(\theta_2), \sin(\theta_2), \frac{\omega_2}{\omega_{2,max}}]$$
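As a sanity check of this encoding, two angles on either side of the wrap-around point end up close in feature space. A minimal sketch (`embed_state` is illustrative; the notebook's `preprocess_result` applies the same mapping to a whole trajectory):

```python
import numpy as np

w1_max = np.radians(100)   # bounds as in the constants cell
w2_max = np.radians(360)

def embed_state(state):
    """Map [th1, w1, th2, w2] to the continuous 6-feature encoding."""
    th1, w1, th2, w2 = state
    return np.array([np.cos(th1), np.sin(th1), w1 / w1_max,
                     np.cos(th2), np.sin(th2), w2 / w2_max])

# 359 deg and 1 deg are 358 deg apart as raw angles,
# but nearly identical after the cos/sin embedding:
a = embed_state([np.radians(359), 0.0, 0.0, 0.0])
b = embed_state([np.radians(1),   0.0, 0.0, 0.0])
```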

In [18]:
plt.close('all')

state = generate_state()                          # return an initial state
state2 = state * np.array([1.0, 1.0, 1.05, 1.0])  # return a second state with 5% difference

simulation1 = simulate(state)                     # return an array of the next timestep + 1 values
simulation1 = preprocess_result(simulation1)      # pre-processing as previously explained (not really required here)

simulation2 = simulate(state2)
simulation2 = preprocess_result(simulation2)

# rendering 1
x1 = L1*simulation1[:, 1]
y1 = -L1*simulation1[:, 0]

x2 = L2*simulation1[:, 4] + x1
y2 = -L2*simulation1[:, 3] + y1

# rendering 2
x3 = L1*simulation2[:, 1]
y3 = -L1*simulation2[:, 0]

x4 = L2*simulation2[:, 4] + x3
y4 = -L2*simulation2[:, 3] + y3

# creation of the animation
fig = plt.figure()
ax = fig.add_subplot(111, autoscale_on=False, xlim=(-2, 2), ylim=(-2, 2))
ax.grid()

line, = ax.plot([], [], 'o-', lw=2)
line2, = ax.plot([], [], 'o-', lw=2)
time_template = 'time = %.1fs'
time_text = ax.text(0.05, 0.9, '', transform=ax.transAxes)

anim = animation.FuncAnimation(fig, animate, np.arange(1, len(y1)),
                               interval=1000*dt, blit=True, init_func=init)

# anim.save('double_pendulum.mp4', fps=15)
HTML(anim.to_html5_video())

Out[18]:

# Create Dataset¶

Now that we have a way to simulate the model over T timesteps, we create the dataset by producing N simulations. For each simulation, the first T steps go into the input matrix and the last T steps into the target, so the target is simply the input shifted by one timestep (that's why we generate T+1 steps per simulation).
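The shift-by-one construction can be seen on a toy trajectory (a sketch; the real arrays use T = 100 steps and 6 features):

```python
import numpy as np

T = 5                                             # toy horizon
traj = np.arange((T + 1) * 2).reshape(T + 1, 2)   # fake simulation of T+1 steps

X_i = traj[:-1]   # inputs:  steps 0 .. T-1
Y_i = traj[1:]    # targets: steps 1 .. T (same data shifted by one step)
```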

In [7]:
sample_size = 50000

X = np.zeros(shape=(sample_size, timestep, 6))
Y = np.zeros(shape=(sample_size, timestep, 6))

for i in range(sample_size):
    if i % 1000 == 0:
        print("Generating simulation", i)
    init_state = generate_state()
    y = simulate(init_state)
    y = preprocess_result(y)
    X[i] = y[:-1]
    Y[i] = y[1:]

np.save("X.npy", X)
np.save("y.npy", Y)

Generating simulation 0
Generating simulation 1000
Generating simulation 2000
...
Generating simulation 49000


# Create Model¶

Now that we have our dataset, we can create a model. For this phase we use a recurrent neural network architecture, since we have a time-series dataset of shape (N = 50000, T = 100, I = 6). Several architectures were tried, but we present only the one with the best results: three LSTM layers of 32 units each, followed by a Dense layer to produce the 6 outputs. The optimizer is Nadam and the loss is mean squared error.
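The parameter counts that appear in the model summary can be checked by hand: a CuDNNLSTM layer has four gates, each with an input kernel, a recurrent kernel and, in the cuDNN convention, two bias vectors. A small arithmetic sketch:

```python
def cudnnlstm_params(input_dim, units):
    # 4 gates x (input kernel + recurrent kernel), plus 2 bias vectors per gate
    return 4 * units * (input_dim + units) + 8 * units

def dense_params(input_dim, units):
    return input_dim * units + units

layers = [cudnnlstm_params(6, 32),    # 5120
          cudnnlstm_params(32, 32),   # 8448
          cudnnlstm_params(32, 32),   # 8448
          dense_params(32, 6)]        # 198
total = sum(layers)                   # 22214, matching the summary
```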

In [20]:
from keras.models import Sequential
from keras.layers import Dense, CuDNNLSTM
from keras.models import load_model

Using TensorFlow backend.

In [21]:
X = np.load("X.npy")
Y = np.load("y.npy")

In [22]:
model = Sequential()
model.add(CuDNNLSTM(32, return_sequences=True, input_shape=(timestep, 6)))
model.add(CuDNNLSTM(32, return_sequences=True))
model.add(CuDNNLSTM(32, return_sequences=True))
model.add(Dense(6))

model.summary()

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
cu_dnnlstm_1 (CuDNNLSTM)     (None, 100, 32)           5120
_________________________________________________________________
cu_dnnlstm_2 (CuDNNLSTM)     (None, 100, 32)           8448
_________________________________________________________________
cu_dnnlstm_3 (CuDNNLSTM)     (None, 100, 32)           8448
_________________________________________________________________
dense_1 (Dense)              (None, 100, 6)            198
=================================================================
Total params: 22,214
Trainable params: 22,214
Non-trainable params: 0
_________________________________________________________________

In [23]:
model.compile(loss='mean_squared_error', optimizer='Nadam')

In [24]:
model.fit(X, Y,
          epochs=100,
          batch_size=256,
          validation_split=0.2,
          verbose=2)

Train on 40000 samples, validate on 10000 samples
Epoch 1/100
- 5s - loss: 0.1349 - val_loss: 0.0266
Epoch 2/100
- 3s - loss: 0.0180 - val_loss: 0.0137
Epoch 3/100
- 3s - loss: 0.0106 - val_loss: 0.0077
...
Epoch 99/100
- 3s - loss: 6.1083e-05 - val_loss: 2.3233e-05
Epoch 100/100
- 3s - loss: 5.7713e-05 - val_loss: 5.0950e-05

Out[24]:
<keras.callbacks.History at 0x21fd68babe0>
In [26]:
model.save("model.h5")


# Generator¶

Now that the model is trained, we don't want to run a full simulation every time, so we convert the architecture from many-to-many to one-to-many: a stateful model that takes a single timestep and is fed its own output.
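The one-to-many generator works autoregressively: feed one state in, take the one-step prediction out, and feed it back in as the next input while the stateful layers carry the hidden state. A minimal sketch of that rollout loop, with a stand-in linear map in place of `newModel.predict` (hypothetical, just to show the pattern):

```python
import numpy as np

def step(state):
    """Stand-in for the trained one-step model (hypothetical linear map)."""
    return state * 0.99

state = np.ones(6)                  # preprocessed initial state
trajectory = [state]
for _ in range(100):                # each output becomes the next input
    state = step(state)
    trajectory.append(state)
trajectory = np.array(trajectory)   # shape (101, 6)
```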

In [27]:
newModel = Sequential()
newModel.add(CuDNNLSTM(32,
                       stateful=True,
                       return_sequences=True,
                       batch_input_shape=(1, 1, 6)))
newModel.add(CuDNNLSTM(32,
                       stateful=True,
                       return_sequences=True))
newModel.add(CuDNNLSTM(32,
                       stateful=True,
                       return_sequences=True))
newModel.add(Dense(6))

try:
    weight = model.get_weights()
except NameError:  # if the training cell wasn't run, reload the weights from disk
    weight = load_model("model.h5").get_weights()
newModel.set_weights(weight)

newModel.summary()

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
cu_dnnlstm_4 (CuDNNLSTM)     (1, 1, 32)                5120
_________________________________________________________________
cu_dnnlstm_5 (CuDNNLSTM)     (1, 1, 32)                8448
_________________________________________________________________
cu_dnnlstm_6 (CuDNNLSTM)     (1, 1, 32)                8448
_________________________________________________________________
dense_2 (Dense)              (1, 1, 6)                 198
=================================================================
Total params: 22,214
Trainable params: 22,214
Non-trainable params: 0
_________________________________________________________________


Now we can create an initial state, run the simulation (with scipy) and the prediction (with our model), then animate both to visualize the error.

In [45]:
plt.close('all')

newModel.reset_states()                # required to reset the cell state and hidden state

init_state = generate_state()          # new initial position
simulation = simulate(init_state)      # simulation with scipy
y = preprocess_result(simulation)      # pre-process the input with cos and sin

prediction = []
init_state = y[0, :]                   # reuse the preprocessed initial state
state = np.expand_dims(init_state, 0)
state = np.expand_dims(state, 0)
for i in range(timestep+1):            # generate T+1 timesteps autoregressively
    state = newModel.predict(state)
    prediction.append(state[0, 0, :])

prediction = np.array(prediction)

# rendering simulation
x1 = L1*y[:, 1]
y1 = -L1*y[:, 0]

x2 = L2*y[:, 4] + x1
y2 = -L2*y[:, 3] + y1

# rendering prediction
x3 = L1*prediction[:, 1]
y3 = -L1*prediction[:, 0]

x4 = L2*prediction[:, 4] + x3
y4 = -L2*prediction[:, 3] + y3

# Animation
fig = plt.figure()
ax = fig.add_subplot(111, autoscale_on=False, xlim=(-2, 2), ylim=(-2, 2))
ax.grid()

line, = ax.plot([], [], 'o-', lw=2)
line2, = ax.plot([], [], 'o-', lw=2)
time_template = 'time = %.1fs'
time_text = ax.text(0.05, 0.9, '', transform=ax.transAxes)

ani = animation.FuncAnimation(fig, animate, np.arange(1, len(y1)),
                              interval=1000*dt, blit=True, init_func=init)

# ani.save('double_pendulum_3.mp4', fps=15)
HTML(ani.to_html5_video())

Out[45]:

# Conclusion¶

As a result, we get quite a good simulation, even though 100 timesteps is a long horizon for an LSTM trained on only 50k samples. In some cases the predicted trajectory differs noticeably, but the dynamics look plausible. This is a simple example of the power of an LSTM to simulate a "complex" system.