Derivatives from a class instance in TF1
I am using the Physics Informed Neural Networks (PINNs) methodology to solve high-dimensional non-linear PDEs. Specifically, I am using the class from https://github.com/maziarraissi/PINNs/blob/master/appendix/continuous_time_inference%20(Burgers)/Burgers.py, where the function `def net_f(self, x, t):` is modified to include more variables than just `x` and `t`, i.e. `def net_f(self, w, z, v, t):`.
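For concreteness, a sketch of what the modified networks might look like, assuming the class keeps the same `neural_net`, `weights`, and `biases` structure as the Burgers script (the residual `f` below is a hypothetical stand-in for the actual PDE terms):

```python
import tensorflow as tf  # TF 1.15.2, graph mode

def net_u(self, w, z, v, t):
    # Same pattern as the Burgers example, with four inputs concatenated.
    u = self.neural_net(tf.concat([w, z, v, t], 1), self.weights, self.biases)
    return u

def net_f(self, w, z, v, t):
    u = self.net_u(w, z, v, t)
    # tf.gradients returns a list of tensors; [0] picks out the gradient.
    u_t = tf.gradients(u, t)[0]
    u_w = tf.gradients(u, w)[0]
    u_z = tf.gradients(u, z)[0]
    u_v = tf.gradients(u, v)[0]
    # Hypothetical residual; replace with the real PDE terms.
    f = u_t + u * (u_w + u_z + u_v)
    return f
```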
I have two PDEs and a policy function (each an instance of the `PhysicsInformedNN` class), and at some point I have a function that combines the approximations from each instance's `net_f(self, w, z, v, t)`, say `y = PDE1(w,z,v,t) + PDE2(w,z,v,t) + policy(w,z,v,t)`. I want to take derivatives of this function with respect to `w`, `z`, and `v`. However, I can't figure out how to do that in TensorFlow 1.15.2. This is more or less trivial to do in TF2, but I want to stick with TensorFlow 1.15.2 for several reasons. A sketch of what I am after is shown below.
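In TF1 graph mode this amounts to building a symbolic tensor for `y` and calling `tf.gradients` on it. A minimal sketch, assuming `PDE1`, `PDE2`, and `policy` are the three instances and all of them build their ops in the same default graph:

```python
import tensorflow as tf  # TF 1.15.2, graph mode

# Fresh placeholders fed to all three instances (shapes assumed [N, 1]).
w = tf.placeholder(tf.float32, shape=[None, 1])
z = tf.placeholder(tf.float32, shape=[None, 1])
v = tf.placeholder(tf.float32, shape=[None, 1])
t = tf.placeholder(tf.float32, shape=[None, 1])

# Symbolic combination of the three approximations.
y = PDE1.net_f(w, z, v, t) + PDE2.net_f(w, z, v, t) + policy.net_f(w, z, v, t)

# One gradient tensor per variable of interest.
dy_dw, dy_dz, dy_dv = tf.gradients(y, [w, z, v])
```

One caveat: in the linked script each `PhysicsInformedNN` creates its own `tf.Session`, so to evaluate these gradient tensors the three instances need to share a graph and a session in which all of their variables have been initialized.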
Basically, the problem boils down to this: from an instantiated model, `model = PhysicsInformedNN(X_u_train, u_train, X_f_train, layers, lb, ub, nu)`, take a derivative of `model.net_u(x, t)` with respect to `x` or `t`, whether the model is trained or not. If I can do that, I can figure out how to take derivatives of the function `y` above with respect to the variables in each PDE and the policy.
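A minimal sketch of this boiled-down version, assuming the model builds its ops in the default graph and exposes `sess` and `net_u` as in the linked script (`x_star` and `t_star` are just example evaluation points):

```python
import numpy as np
import tensorflow as tf  # TF 1.15.2, graph mode

# Fresh placeholders; model.net_u adds new ops to the same graph,
# reusing the model's existing weight and bias variables.
x = tf.placeholder(tf.float32, shape=[None, 1])
t = tf.placeholder(tf.float32, shape=[None, 1])

u = model.net_u(x, t)
u_x = tf.gradients(u, x)[0]  # du/dx
u_t = tf.gradients(u, t)[0]  # du/dt

# Evaluate at some points; this works whether or not the model is
# trained, as long as its variables have been initialized.
x_star = np.linspace(-1.0, 1.0, 100)[:, None].astype(np.float32)
t_star = np.full_like(x_star, 0.5)
u_x_val, u_t_val = model.sess.run([u_x, u_t],
                                  feed_dict={x: x_star, t: t_star})
```

Because `net_u` only reuses the model's existing `tf.Variable` weights, these gradient ops can be added after instantiation, and their values change as training updates the weights.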
Note: this can be done fully analytically, i.e. I can hard-code the formulas for the derivatives of `y` using values from `model.predict()` (which would be `numpy` arrays), and I can check that the derivative formulas are correct against TF2. I just want to use automatic differentiation instead, since the formulas are complicated and become very cumbersome as the dimension of the PDEs increases.
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow