Inverse transformation of a conditional masked autoregressive flow in TensorFlow Probability
The following is a normalizing flow model of the log conditional density of x_ given c_.
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

tfk = tf.keras
tfkl = tf.keras.layers
tfpl = tfp.layers
tfd = tfp.distributions
tfb = tfp.bijectors

n = 100
dims = 10

# Autoregressive network that conditions on a 10-dimensional context vector
regNet1 = tfb.AutoregressiveNetwork(
    params=2,
    hidden_units=[64],
    event_shape=(dims,),
    conditional=True,
    conditional_event_shape=(10,),
    activation="relu",
    dtype=np.float32,
)
maf1 = tfb.MaskedAutoregressiveFlow(shift_and_log_scale_fn=regNet1, name="maf1")

# Push a standard multivariate Normal base distribution through the flow
maf_mod = tfd.TransformedDistribution(
    distribution=tfd.MultivariateNormalDiag(
        loc=np.zeros(dims).astype(np.float32),
        scale_diag=np.ones(dims).astype(np.float32),
    ),
    bijector=maf1,
)
# Construct and fit model
x_ = tfkl.Input(shape=(dims,), dtype=tf.float32)
c_ = tfkl.Input(shape=(dims,), dtype=tf.float32)
log_prob_ = maf_mod.log_prob(
    x_,
    bijector_kwargs={"conditional_input": c_},
)
model_log_prob = tfk.Model([x_, c_], log_prob_)
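For context (not part of the original post), the log-density this model evaluates follows the change-of-variables formula log p_x(x) = log p_z(z) + log|det dz/dx|, where z is the inverse image of x under the flow. A minimal NumPy sketch, assuming a single fixed shift/log-scale affine step in place of the full conditional masked network:

```python
import numpy as np

def gaussian_log_prob(z):
    # Log density of a standard multivariate Normal, summed over dimensions
    return -0.5 * np.sum(z**2 + np.log(2 * np.pi), axis=-1)

# Hypothetical shift and log-scale parameters standing in for the network output
rng = np.random.default_rng(0)
dims = 10
shift = rng.normal(size=dims)
log_scale = rng.normal(size=dims) * 0.1

def flow_log_prob(x):
    # Change of variables: log p_x(x) = log p_z(z) + log|det dz/dx|
    z = (x - shift) / np.exp(log_scale)   # inverse transform
    log_det_jac = -np.sum(log_scale)      # log|det dz/dx| for a diagonal affine map
    return gaussian_log_prob(z) + log_det_jac

x = rng.normal(size=(5, dims))
lp = flow_log_prob(x)  # one log-density per batch row
```

The real model replaces the fixed shift and log-scale with outputs of the masked network, which also receive the conditioning input c_.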
What is the code/syntax to get the inverse of x_ given c_? That is, I want the draws (from the base distribution, in this example the multivariate Normal) that map through the bijector (maf1, built from regNet1) to x_ given c_.
My aim is to build a model of the form:
model_inverse = tfk.Model([x_, c_], inv_x_)
where inv_x_ are the draws that correspond to x_ and c_.
I would imagine that something like inv_x_ = regNet1.inverse(x_, c_) should work, but I am unable to figure out the correct syntax and usage.
Solution 1:[1]
def maf1inverse(x_):
    # regNet1 returns shift and log-scale parameters with shape (batch, dims, 2)
    params = regNet1(x_, conditional_input=c_)
    # Invert the affine transform: z = (x - shift) * exp(-log_scale)
    return (x_ - params[:, :, 0]) / tf.exp(params[:, :, 1])

inv_x_ = maf1inverse(x_)
model_inverse = tfk.Model([x_, c_], inv_x_)
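A side note (this is an assumption about the TFP API, not part of the answer above): TFP bijectors also expose the inverse directly, and conditional keyword arguments are forwarded to the shift-and-log-scale network, so inv_x_ = maf1.inverse(x_, conditional_input=c_) should behave equivalently. The elementwise algebra the solution relies on can be sanity-checked in plain NumPy, with fixed hypothetical shift and log-scale values standing in for the network output:

```python
import numpy as np

rng = np.random.default_rng(42)
dims = 10

# Hypothetical shift / log-scale values standing in for regNet1's output
# (in a real MAF these depend autoregressively on earlier dimensions)
shift = rng.normal(size=dims)
log_scale = rng.normal(size=dims) * 0.1

# Forward pass of one affine flow step: x = z * exp(log_scale) + shift
z = rng.normal(size=(4, dims))
x = z * np.exp(log_scale) + shift

# The solution's inverse: z_rec = (x - shift) / exp(log_scale)
z_rec = (x - shift) / np.exp(log_scale)

assert np.allclose(z, z_rec)  # the inverse recovers the base-distribution draws
```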
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow