{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Workshop: Poisson Regression in TensorFlow\n",
"\n",
"### Monday, April 24\n",
"\n",
"Today's exercises will illustrate a bit about the basics of TensorFlow.\n",
"\n",
"Today, we'll build and train a simple model, extending what we saw in the lecture videos from the case of a simple linear model to the case of Poisson regression.\n",
"\n",
"## Review: Poisson regression\n",
"\n",
"Given predictor-response pairs $(X_i,Y_i)$ with $X_i \\in \\mathbb{R}^d$ and $Y_i \\in \\mathbb{R}$, Poisson regression models the response $Y_i$ as being distributed according to a Poisson with mean $\\lambda(X_i) = \\exp\\{ \\alpha + \\beta^T X_i\\}$.\n",
"That is, for $y=0,1,2,\\dots$,\n",
"$$\n",
"\\Pr[ Y_i=y \\mid X_i=x, \\alpha, \\beta] = \\frac{ e^{y(\\beta^T x + \\alpha)} e^{-\\exp\\{ \\beta^T x + \\alpha\\} } }{ y! }.\n",
"$$\n",
"\n",
"Our goal is to implement this model, and then use TF's built-in optimizers to maximize the likelihood of this model with respect to $\\alpha$ and $\\beta$."
]
},
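{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check on the formula above (an aside, not one of the exercises), the displayed pmf is just the usual Poisson pmf $\\lambda^y e^{-\\lambda}/y!$ with $\\lambda = e^{\\beta^T x + \\alpha}$ plugged in, so the two forms should agree numerically. The scalar values of `alpha`, `beta`, `x`, and `y` below are arbitrary choices for illustration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import math\n",
"\n",
"alpha, beta, x, y = 0.5, -0.25, 2.0, 3\n",
"lam = math.exp(beta * x + alpha)\n",
"# Poisson pmf written the usual way...\n",
"pmf1 = lam**y * math.exp(-lam) / math.factorial(y)\n",
"# ...and written as in the display above, with lambda = exp{beta*x + alpha} expanded out.\n",
"pmf2 = math.exp(y * (beta * x + alpha)) * math.exp(-math.exp(beta * x + alpha)) / math.factorial(y)\n",
"print(abs(pmf1 - pmf2) < 1e-12)"
]
},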
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Reminder: if you have not already installed TensorFlow on your local machine, do so with `pip install tensorflow`."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"2.8.0\n"
]
}
],
"source": [
"import tensorflow as tf\n",
"print(tf.__version__)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In today's exercises, we're going to focus on using the `keras.Module` class for implementing our models. This has some minor drawbacks when the time comes to upload our models to a server and do prediction, but it is a much simpler class to work with, so we'll use it for now.\n",
"\n",
"## Review: TF tensors and all that\n",
"\n",
"First things first, let's review the difference between constant tensors and variable tensors in TF.\n",
"\n",
"Here are a couple of illustrative constant tensors. Recall that constant tensors are immutable, in much the same way as Python tuples are."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tf.Tensor(\n",
"[[1. 0. 0.]\n",
" [0. 1. 0.]\n",
" [0. 0. 1.]], shape=(3, 3), dtype=float32)\n"
]
}
],
"source": [
"identity = tf.constant([[1,0,0],[0,1,0],[0,0,1]], dtype=tf.float32)\n",
"print(identity)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tf.Tensor(\n",
"[[ 1. 2. 3.]\n",
" [ 4. 5. 6.]\n",
" [ 7. 8. 9.]\n",
" [10. 11. 12.]], shape=(4, 3), dtype=float32)\n"
]
}
],
"source": [
"oneThruTwelve = tf.constant([[1,2,3],[4,5,6],[7,8,9],[10,11,12]], dtype=tf.float32)\n",
"print(oneThruTwelve)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we try to change entries in these, we get an error."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"ename": "TypeError",
"evalue": "'tensorflow.python.framework.ops.EagerTensor' object does not support item assignment",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mTypeError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0moneThruTwelve\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;36m3.1415\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
"\u001b[0;31mTypeError\u001b[0m: 'tensorflow.python.framework.ops.EagerTensor' object does not support item assignment"
]
}
],
"source": [
"oneThruTwelve[1,1] = 3.1415"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is going to be a problem if we want to change the parameters of a model during optimization!\n",
"\n",
"Luckily, we have variable tensors, which do support item assignment, updating, etc."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n"
]
}
],
"source": [
"a = tf.Variable( [1,2,3], dtype=tf.float32)\n",
"print(a)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"ename": "TypeError",
"evalue": "'ResourceVariable' object does not support item assignment",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mTypeError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0ma\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;36m10\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
"\u001b[0;31mTypeError\u001b[0m: 'ResourceVariable' object does not support item assignment"
]
}
],
"source": [
"a[1] = 10"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Hey! What the heck?! I thought we were allowed to do this!\n",
"\n",
"Well, it's a bit more complicated than that.\n",
"Because TF code is designed so that we can, in the end, build a function graph, we have to use its built-in commands for updating."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
""
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"a.assign([2,3,4])"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n"
]
}
],
"source": [
"print(a)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
""
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"a[1].assign(-1)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
""
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"a"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Recall from lecture the way we incorporated this into a simple linear model.\n",
"\n",
"Our first naive solution involved writing a function that takes a slope and intercept and applies that slope and intercept to a given observation $x$."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"def linear_prediction_pyfn(c,d,x):\n",
" return c*x + d\n",
"# Reminder: we have to turn a regular Python function into a TF function\n",
"# for inclusion in the function graph.\n",
"linear_model = tf.function(linear_prediction_pyfn)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
""
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"linear_model(1,2,3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With this in hand, we can apply a (one-dimensional) linear model to data by passing parameters, say, $W$ and $b$, and apply those "
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
""
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"W = tf.Variable([0.5], dtype=tf.float32)\n",
"b = tf.Variable([-1], dtype=tf.float32)\n",
"linear_model(W,b,[0,1,2,3,4])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Building a linear model\n",
"\n",
"Okay, so with all of that recap out of the way, here's our linear model (again, recall that we're using the `Module` class, not the `Model` class, for the sake of simplicity while we get our bearings."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
""
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"class LinearModel(tf.Module):\n",
" def __init__(self, name=None):\n",
" super().__init__(name=name)\n",
" self.W = tf.Variable([0.5], dtype=tf.float32, name=\"slope\")\n",
" self.b = tf.Variable([-1.0], dtype=tf.float32, name=\"intercept\")\n",
" def __call__(self, x):\n",
" # Note: we could use our function linear_model here, if we wanted.\n",
" return self.W * x + self.b\n",
"\n",
"linear_model = LinearModel(name='linear')\n",
"\n",
"linear_model(tf.constant([5.0]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, when we have actual data, we want to be able to define a loss function for us to optimize.\n",
"For ordinary least squares, this loss function is the sum of squared residuals."
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"def loss(y_observed, y_predicted):\n",
" return tf.reduce_sum(tf.square(y_observed - y_predicted))"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"23.5"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"x = tf.constant( [1,2,3,4], dtype=tf.float32 )\n",
"y = tf.constant( [0,-1,-2,-3], dtype=tf.float32 )\n",
"loss( y, linear_model(x) ).numpy()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we set $W$ and $b$ appopriately, we can drive this loss to zero."
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"0.0"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"linear_model.W.assign([-1])\n",
"linear_model.b.assign([1])\n",
"loss( linear_model(x), y ).numpy()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, to train our model on real data, we will need to compute a gradient with respect to this loss and update our parameters accordingly.\n",
"\n",
"The function below computes a single gradient step and update."
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"def train(model, x, y, learning_rate):\n",
" with tf.GradientTape() as t:\n",
" current_loss = loss(y, model(x))\n",
" \n",
" (dW, db) = t.gradient(current_loss, [model.W, model.b])\n",
" \n",
" model.W.assign_sub( learning_rate*dW )\n",
" model.b.assign_sub( learning_rate*db )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we need to repeat this a few times.\n",
"Of course, in practice, the problem of how to choose a learning rate, when to stop taking gradient steps, etc. is a problem unto itself, but we will mostly leave that for your other courses.\n",
"\n",
"On Wednesday, though, we will discuss the details of gradient descent and how TF magically computes any gradient for you using the `gradient` method.\n",
"For now, we're going to just treat it as magic."
]
},
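{
"cell_type": "markdown",
"metadata": {},
"source": [
"Just to make the \"repeat this a few times\" idea concrete, here is a rough sketch (an aside, not one of the exercises) of what the full loop does, written in plain Python with the gradients worked out by hand for the squared-error loss on the toy data from above. The learning rate of 0.01 and the step count of 2000 are arbitrary choices, not tuned values; in TF we would instead just call `train` in a loop."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: gradient descent by hand for loss(W, b) = sum_i (y_i - (W*x_i + b))^2.\n",
"# The gradients are dW = -2*sum_i x_i*(y_i - W*x_i - b) and db = -2*sum_i (y_i - W*x_i - b).\n",
"xs = [1.0, 2.0, 3.0, 4.0]\n",
"ys = [0.0, -1.0, -2.0, -3.0]\n",
"W, b = 0.5, -1.0  # same starting values as LinearModel\n",
"lr = 0.01\n",
"for step in range(2000):\n",
"    resid = [yi - (W * xi + b) for xi, yi in zip(xs, ys)]\n",
"    dW = -2.0 * sum(r * xi for r, xi in zip(resid, xs))\n",
"    db = -2.0 * sum(resid)\n",
"    W -= lr * dW\n",
"    b -= lr * db\n",
"# From the cells above, the zero-loss parameters are W = -1, b = 1.\n",
"print(round(W, 4), round(b, 4))"
]
},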
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Implementing Poisson regression\n",
"\n",
"Okay, that's enough review. Let's do a thing!\n",
"\n",
"Recall from above that under Poisson regression, we model the response $Y_i$ as being Poisson-distributed with a parameter $\\lambda = \\lambda(X_i)$ that depends on $X_i$ via parameters $\\beta \\in \\mathbb{R}^d$ and $\\alpha \\in \\mathbb{R}$.\n",
"\n",
"Specifically, $\\lambda(X_i) = e^{\\beta^T x + \\alpha},$\n",
"so that, for $y=0,1,2,\\dots$,\n",
"$$\n",
"\\Pr[ Y_i=y \\mid X_i=x, \\alpha, \\beta] = \\frac{ e^{y(\\beta^T x + \\alpha)} e^{-\\exp\\{ \\beta^T x + \\alpha\\} } }{ y! }.\n",
"$$\n",
"Thus, given independent observations $(X_1,Y_1),(X_2,Y_2),\\dots,(X_n,Y_n)$, the model has log-likelihood\n",
"$$\n",
"\\ell( \\alpha, \\beta )\n",
"=\n",
"\\sum_{i=1}^n \\left[ Y_i (\\beta^T X_i + \\alpha) - \\exp\\{ \\beta^T X_i + \\alpha\\} \\right] - \\sum_{i=1}^n \\log Y_i!~.\n",
"$$\n",
"Since the $\\log Y_i$ terms do not depend on the parameters $\\alpha, \\beta$, for the purposes of optimizing the parameters, we can just consider the loss function\n",
"$$\n",
"-\\sum_{i=1}^n \\left[ Y_i (\\beta^T X_i + \\alpha) - \\exp\\{ \\beta^T X_i + \\alpha\\} \\right]\n",
"$$\n",
"\n",
"Our goal is to implement this model, and then use TF's built-in optimizers to maximize this log-likelihood with respect to $\\alpha$ and $\\beta$.\n",
"\n",
"So, whereas above we chose a loss that corresponded to a squared error between our predicted responses and the true ones, now our loss function is the (negative!) log-likelihood of the model.\n",
"\n",
"### Implementing the model\n",
"\n",
"First and foremost, we need a way to translate a predictor $X_i$, $\\alpha$ and $\\beta$ into a value for $\\lambda(X_i)$.\n",
"Implement a Python function `poislambda` that takes three arguments: `x`, `alpha` and `beta` and returns $\\lambda(x) = \\exp\\{ \\beta^T x + \\alpha \\}$.\n",
"\n",
"Then, cast this function to a `tf.function` object afterwards so that it plays nicely with TensorFlow in the next steps (this is already done for you in the code block below).\n",
"\n",
"Hint: the exponential in TF is `tf.math.exp`."
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [],
"source": [
"def poislambda( x, alpha, beta ):\n",
" pass\n",
" \n",
"poislambda = tf.function(poislambda)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Use the code block below to test your code.\n",
"Note that this code makes specific assumptions about the shape of `alpha` and `beta`.\n",
"The result should be (up to floating point error)"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
""
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tf.math.exp( tf.constant( [[2.0],[6.0],[10.0]], dtype=tf.float32 ) )"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"# TEST CODE. Result should be like the tensor above.\n",
"x = tf.constant( [[1,2],[3,4],[5,6]], dtype=tf.float32 )\n",
"beta = tf.Variable( [[1],[1]], dtype=tf.float32 )\n",
"alpha = tf.Variable( [-1], dtype=tf.float32 )\n",
"\n",
"poislambda( x, alpha, beta )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, let's put all this into a class that extends the `tf.Module` class, and has two methods:\n",
"\n",
"- `__init__` : should take an optional `name` argument (a name for the specific model instance that we create; defaults to `None`) and an optional `dim` argument (the dimension of the predictors; defaults to `1`). This should include a call to super().__init__ to make sure that we inherit all the standard intiialization for the `tf.Module` class.\n",
"- `__call__` : takes an argument `x`, the data, and returns $\\lambda(x)$.\n",
"\n",
"Your class should have instance attributes `alpha` and `beta` that are the Tensor objects encoding the parameters $\\alpha$ and $\\beta$ above."
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"class PoissonRegressionModel(tf.Module):\n",
" def __init__(self, name=None, dim=1):\n",
" pass\n",
" \n",
" def poislambda(self, x):\n",
" pass\n",
" \n",
" def __call__(self, x):\n",
" pass"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The test code below should return the following tensor (up to floating point):"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
""
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tf.constant( np.array([[1.6487212],[4.481689 ], [7.389056 ]]), dtype=tf.float32)"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"ename": "AttributeError",
"evalue": "'PoissonRegressionModel' object has no attribute 'alpha'",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mAttributeError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[0mprm\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mPoissonRegressionModel\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mname\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m'prtest'\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 2\u001b[0;31m \u001b[0mprm\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0malpha\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0massign\u001b[0m\u001b[0;34m(\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0;36m1.0\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 3\u001b[0m \u001b[0mprm\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mbeta\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0massign\u001b[0m\u001b[0;34m(\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m1.0\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[0mprm\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtf\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mconstant\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m-\u001b[0m\u001b[0;36m0.5\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m0.5\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m1.0\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdtype\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mtf\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mfloat32\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mAttributeError\u001b[0m: 'PoissonRegressionModel' object has no attribute 'alpha'"
]
}
],
"source": [
"prm = PoissonRegressionModel(name='prtest')\n",
"prm.alpha.assign( [1.0] )\n",
"prm.beta.assign( [[1.0]] )\n",
"prm(tf.constant([[-0.5],[0.5],[1.0]], dtype=tf.float32))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Implementing the loss\n",
"\n",
"Now we need to implement our loss function.\n",
"Again, whereas in linear regression we had a loss given by squared residuals, now our loss is the negative log-likelihood,\n",
"$$\n",
"-\\sum_{i=1}^n \\left[ Y_i (\\beta^T X_i + \\alpha) - \\exp\\{ \\beta^T X_i + \\alpha\\} \\right]\n",
"$$\n",
"\n",
"Define a function `poisloss` that takes arguments `y` and `lambda` (in that order) and returns the negative log-likeihood of the data when $\\lambda = \\exp\\{\\beta^T X + \\alpha\\}$ above.\n",
"Observe that if $\\lambda = \\exp\\{\\beta^T X + \\alpha\\}$, then $Y(\\beta^T X + \\alpha) = \\log \\lambda^Y = Y \\log \\lambda$."
]
},
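{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see that last identity in action (an aside, not part of the exercise), here is a quick scalar check; the values of `alpha`, `beta`, `x`, and `y` are arbitrary."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import math\n",
"\n",
"alpha, beta, x, y = -1.0, 0.5, 3.0, 4\n",
"lam = math.exp(beta * x + alpha)\n",
"# y*(beta*x + alpha) should equal y*log(lam), since log(lam) = beta*x + alpha.\n",
"lhs = y * (beta * x + alpha)\n",
"rhs = y * math.log(lam)\n",
"print(abs(lhs - rhs) < 1e-9)"
]
},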
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [],
"source": [
"def poisloss( y, lam ):\n",
" pass"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [],
"source": [
"# TEST: this should evaluate to 67095.84 (up to floating point arithmetic)\n",
"x = tf.constant( [[1,2],[3,4],[5,6]], dtype=tf.float32 )\n",
"y= tf.constant( [2,4,6], dtype=tf.float32 )\n",
"beta = tf.Variable( [[1],[1]], dtype=tf.float32 )\n",
"alpha = tf.Variable( [-1], dtype=tf.float32 )\n",
"\n",
"poisloss( y, poislambda( x, alpha, beta ) )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Implementing gradient descent\n",
"\n",
"Okay, so we've got our loss function.\n",
"\n",
"Now we need to do gradient descent with respect to that loss function.\n",
"\n",
"In the old days (\"when I was your age...\", but seriously-- this was back when I was in graduate school, not even ten years ago), you had to write down our model, take derivatives (not always easy to do!) and derive gradient update questions by hand.\n",
"\n",
"Now, TF will do all of this for us, taking advantage of all the progress we've made on autodifferentiation in the past decade (we'll talk a bunch about this on Wednesday, because it's really cool).\n",
"\n",
"Define a function `poistrain` that has the same signature as the `train` function from lecture (reproduced below), but which performs a single gradient step on our new Poisson regression model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"class LinearModel(tf.Module):\n",
" def __init__(self, name=None):\n",
" super().__init__(name=name)\n",
" self.W = tf.Variable([0.5], dtype=tf.float32, name=\"slope\")\n",
" self.b = tf.Variable([-1.0], dtype=tf.float32, name=\"intercept\")\n",
" def __call__(self, x):\n",
" # Note: we could use our function linear_model here, if we wanted.\n",
" return self.W * x + self.b\n",
"\n",
"def train(model, x, y, learning_rate):\n",
" with tf.GradientTape() as t:\n",
" current_loss = loss(y, model(x))\n",
" \n",
" (dW, db) = t.gradient(current_loss, [model.W, model.b])\n",
" \n",
" model.W.assign_sub( learning_rate*dW )\n",
" model.b.assign_sub( learning_rate*db )"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [],
"source": [
"def poistrain(model, x, y, learning_rate):\n",
" \n",
" pass"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We're almost there! Now, we need to actually train our model.\n",
"Of course, to do that, we need data.\n",
"I'll spare you having to write yet another data-generation function like we did in our discussion of `sklearn`."
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def poislambda( x, alpha, beta ):\n",
" return tf.math.exp( x@beta + alpha )\n",
"poislambda = tf.function(poislambda)\n",
"\n",
"def generate_pois_reg( n, alpha, beta ):\n",
" rng = np.random.default_rng()\n",
" X = np.random.uniform(-2,2,size=(n,beta.shape[0]) )\n",
" X = np.float32(X)\n",
" lam = poislambda( X, alpha, beta)\n",
" Y = np.random.poisson( lam )\n",
" return (X,Y)"
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {},
"outputs": [],
"source": [
"alphatrue = tf.constant( [1.618], dtype=tf.float32 )\n",
"betatrue = tf.constant( [[-1.0],[0.5],[0.75]], dtype=tf.float32 )\n",
"n = 5000\n",
"(X, Y) = generate_pois_reg( n, alphatrue, betatrue )"
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
""
]
},
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAXcAAAD5CAYAAADcDXXiAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjIsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8li6FKAAAeeklEQVR4nO3df4xd9Xnn8ffHw8UZE9QxxSB7MLGLCCscJ5505LDrVdXQBBOi4AndEGc3FatFpX8QNaRZK0OCFmcVFm9dSCvtNpKzQWUbGnCBTJyQxiX8UJQoQMeZMcaAF7cQ8LWFnQYnIUxgbD/7xz1jX8+c+2Pm/jjn3vm8pNHce+659zxzwM+c+X6f83wVEZiZWXdZkHUAZmbWfE7uZmZdyMndzKwLObmbmXUhJ3czsy7k5G5m1oXOqLWDpLcBPwAWJvvfHxG3StoM/DFwJNn18xHx3eQ9NwPXA8eBP42IndWOce6558aKFSvm+jOYmc1Lu3bt+llELEl7rWZyB94ELo+I1yUVgB9K+ofktS9HxF+U7yzpUmAjsApYBnxf0jsj4nilA6xYsYLR0dF6fhYzM0tI+mml12oOy0TJ68nTQvJV7c6nDcC9EfFmRLwI7AfWziJeMzNrUF1j7pJ6JI0Dh4GHI+LJ5KVPSXpa0l2SFifb+oFXyt5+INk2/TNvkDQqafTIkSPTXzYzswbUldwj4nhErAEuANZKehfwFeAiYA1wCLgj2V1pH5HymdsiYjAiBpcsSR0yMjOzOZpVtUxEHAUeB66MiFeTpH8C+Cqnhl4OAMvL3nYBcLAJsZqZWZ1qJndJSyT1JY97gQ8Az0taWrbbR4Fnksc7gI2SFkpaCVwMPNXcsM3MrJp6qmWWAndL6qH0y2B7RHxH0t9KWkNpyOUl4E8AImKvpO3As8Ax4MZqlTJmZvPRyFiRrTv3cfDoBMv6etm0/hKGBmZMT86Z8tDyd3BwMFwKaWbzxchYkZsf3MPE5Knr3t5CD7dfs3pWCV7SrogYTHvNd6iambXZ1p37TkvsABOTx9m6c1/TjlHPsIyZWUdp9ZBHow4enZjV9rnwlbuZdZWpIY/i0QkCKB6d4OYH9zAyVsw6tJOW9fXOavtcOLmbWVdpx5BHozatv4TeQs9p23oLPWxaf0nTjuFhGTPrKu0Y8mjU1BBRK4eOnNzNrKss6+ulmJLImznk0QxDA/0tnQdwcjezrjA1iVo8OoE4vedJs4c8OoGTu5l1vOl14wEnE3x/Dqtl2sHJ3cw6Xtok6lRi/9Hw5dkElTFXy5hZx+uESdR2c3I3s47XjrrxTuPkbmYdrx11453GY+5m1vHaUTfeaZzczawrtLpuvNN4WMbMrAs5uZuZdSEndzOzLuTkbmbWhZzczcy6kJO7mVkXqlkKKeltwA+Ahcn+90fErZLOAe4DVgAvAddGxGvJe24GrgeOA38aETtbEr2ZZSqL5ezyvoReXtRz5f4mcHlEvAdYA1wp6TJgGHgkIi4GHkmeI+lSYCOwCrgS+GtJPamfbGYdK4vl7DphCb28qJnco+T15Gkh+QpgA3B3sv1uYCh5vAG4NyLejIgXgf3A2qZGbWaZy2I5u05YQi8v6hpzl9QjaRw4DDwcEU8C50fEIYDk+3nJ7v3AK2VvP5Bsm/6ZN0galTR65MiRRn4GM8tAFp0Y3f2xfnUl94g4HhFrgAuAtZLeVWV3pX1Eymdui4jBiBhcsmRJfdGaWW5k0YnR3R/rN6tqmYg4CjxOaSz9VUlLAZLvh5PdDgDLy952AXCw4UjNLFey6MTo7o/1q5ncJS2R1Jc87gU+ADwP7ACuS3a7DvhW8ngHsFHSQkkrgYuBp5oduJlla2ign9uvWU1/Xy+itOrR7desbmnlShbH7FSKmDFicvoO0rspTZj2UPplsD0i/ruk3wa2AxcCLwMfi4ifJ+/5AvBfgGPATRHxD9WOMTg4GKOjo43+LG
Zm84qkXRExmPpareTeDk7uZmazVy25+w5VM7Mu5ORuZtaFnNzNzLqQl9kzs6Zy75d8cHI3s4pmm6iner9MtQiY6v0COMG3mYdlzCzVXJp0ufdLfji5m1mquSTqonu/5IaTu5mlmm2TrpGxYmpjKXDvlyw4uZtZqtk26dq6c9/MDoGUOgm690v7ObmbWarZNumqdEUfeDI1C07uZpZqtk26Kl3R93tIJhNO7mZW0dBAP5vWX8Kyvl4OHp1g6859Fatl3I43X1znbmYVzaZufeq5b2DKByd3M6uoWjlkWtIeGuh3Ms8JD8uYWUVes7RzObmbWUVes7RzObmbWUWeJO1cHnM3s4o8Sdq5nNzNrCpPknYmD8uYmXUhJ3czsy5UM7lLWi7pMUnPSdor6dPJ9s2SipLGk6+ryt5zs6T9kvZJWt/KH8DMzGaqZ8z9GPDZiPiJpLOBXZIeTl77ckT8RfnOki4FNgKrgGXA9yW9MyJOvxPCzMxapuaVe0QcioifJI9/BTwHVJtd2QDcGxFvRsSLwH5gbTOCNTOz+sxqzF3SCmAAeDLZ9ClJT0u6S9LiZFs/8ErZ2w6Q8stA0g2SRiWNHjlyZNaBm5lZZXUnd0lvBx4AboqIXwJfAS4C1gCHgDumdk15+4we/hGxLSIGI2JwyZIlsw7czE4ZGSuybsujrBx+iHVbHq26zqnND3XVuUsqUErs90TEgwAR8WrZ618FvpM8PQAsL3v7BcDBpkRrZjPMpnOjzR/1VMsI+BrwXETcWbZ9adluHwWeSR7vADZKWihpJXAx8FTzQjazcnNZyNq6Xz1X7uuAPwL2SBpPtn0e+ISkNZSGXF4C/gQgIvZK2g48S6nS5kZXypi1jjs3WpqayT0ifkj6OPp3q7znNuC2BuIyszot6+ulmJLIF0isHH7I/WDmKd+hatbh0jo3AhyPIDg1Bu9J1vnFyd2sw01fyLpHM//Qnpg8zk33jbuSZh5xV0izNhkZK7asdW5558aVww9V3M+VNPOHr9zN2mCqXLF4dKLlQyW1VklyJc384ORu1gbtLFesNAZfzpU03c/DMmZt0M5yxfLVk9KqaMBroM4HvnI3a4N2LzQ9NNDPj4Yv55OXXTijjtlroM4PTu5mbZDFQtMjY0Ue2FU8rbGTgD/8XS+bNx94WMasDbJYaDptnD+Ax553F9b5wMndrE3avdC02xLMb07uZmVaWYvebpXaEngydX7wmLtZop216O2QxTi/5Yev3M0S1WrRG7l6z+qvgSzG+S0/nNzNEq0Yo856IY12j/NbfnhYxizRilp0L6RhWXFyN0u0Yoy60h2irlixVvOwjFmi2WPUI2NFRMrq8LhixVrPyd2sTDPHqLfu3Jea2AWuWLGW87CMWYtUGnoJ3EvdWs/J3axFKg299HtIxtqgZnKXtFzSY5Kek7RX0qeT7edIeljSC8n3xWXvuVnSfkn7JK1v5Q9glle+iciyVM+Y+zHgsxHxE0lnA7skPQz8Z+CRiNgiaRgYBj4n6VJgI7AKWAZ8X9I7I+J4hc83a7t23Fjkm4gsSzWTe0QcAg4lj38l6TmgH9gA/H6y293A48Dnku33RsSbwIuS9gNrgR83O3izuWjnjUW+iciyMqsxd0krgAHgSeD8JPFP/QI4L9mtH3il7G0Hkm3TP+sGSaOSRo8ccQtSa59m3Vg0MlZk3ZZHWTn8EOu2PNqxPWisO9Wd3CW9HXgAuCkifllt15RtMyrCImJbRAxGxOCSJUvqDcOsYc1oM9BtTcas+9SV3CUVKCX2eyLiwWTzq5KWJq8vBQ4n2w8Ay8vefgFwsDnhmjWuUhVL36JC3Z/htgKWd/VUywj4GvBcRNxZ9tIO4Lrk8XXAt8q2b5S0UNJK4GLgqeaFbNaYTesvodAz8w/M139zrO4rby+EYXlXz5X7OuCPgMsljSdfVwFbgA9KegH4YPKciNgLbAeeBb4H3OhKGWuWZoxzDw30c9aZM2sJJk8Em3fsre
sz2r3gtdls1VMt80PSx9EB/qDCe24DbmsgLrMZmlnl8ouJydTtRycmGRkrMjTQX7VcctP6S06LBaCwQLzx1jFWDj/kskfLnO9QtY7RzHHualfYW3fuS50w/cx949wycuqXye3XrD55t6lUuvJ/7Y1JT7BaLji5W8do5jh3tbtEDx6dSP1FEsA9T7x8MmEPDfSfvAs1UjqEeYLVsuTkbh2jmePcQwP9LK5QHbOsr7dq06/yhJ32S6CcJ1gtK07u1jFm06ulnonXWz+yquLnVfuFUZ6wayVvT7BaVtzP3TpGvb1a6p14nf55fYsKRMBn7hvnt3or17yXJ+xlfb0VV1tykzDLkpO7dZR6erVUm3id/t6p55t37OW1N05V0BydmExdRWl6wk6rmgHo6y2w+epVrpaxzDi5W9epNFRSPDrBiuGHAFi8qMCtH1kFkJqcYWZiF/CHv9tf9erfJZCWF07u1nWqDZVMee2NSTbdv5uzzjyj6oRouQAee35mkzt3frQ88oSqdZ20idc0k8eDoxVuZqrE1S/WKXzlbl1n+lBJ2iLVc+XqF+sUTu6WW42sllQ+VLJuy6MVh2kWLyrwm8kTdQ3NuPrFOomHZSyXmtkvfdP6SygsmNkeqWeBuPUjq7j9mtWpNzQVFojFiwqI0qLWt1+z2mPr1jF85W65VKmc8bPbd/OZ+8ZndSU/NNDP6E9/ztefePm07QvKXh8a6OeWkT1848lXOB5Bj8TH1y7nS0Orm/UjmbWVr9wtlypNXB6PmNOVfFqVy+SJONlKYGSsyAO7ihxPmsQcj+CBXUU3/rKO5eRuuVTPxOVsGnPVajrmlZWs2zi5Wy69/9/Ut65uvaWJtZqOeWUl6zZO7pZLacMoaeotTazVdMwrK1m38YSqZS6t5LGeK+bZrHxUq01AWo8Ylz5aJ1OkrTLQZoODgzE6Opp1GNZmI2NFNu/YO+Mu0d5CDwvPWFD17tFKTb0aKVecXi3zife5WsbyTdKuiBhMe83DMpaJqTr2tAQ+MXmcyeMnKr43LbFPvW+uE6CulrFuUzO5S7pL0mFJz5Rt2yypKGk8+bqq7LWbJe2XtE/S+lYFbp2t1gpGv36r8mvV/tac6wSoq2Ws29Rz5f43wJUp278cEWuSr+8CSLoU2AisSt7z15Jqd3CyeadVVShznQB1tYx1m5rJPSJ+APy8zs/bANwbEW9GxIvAfmBtA/FZl+qrsH5pLTObCJzSyASoq2Ws2zQy5v4pSU8nwzaLk239wCtl+xxIts0g6QZJo5JGjxypr+zN8q+etUtHxoq8/ptjc/r8f3fROantfBcvKjQ0mTqb9VnNOsFck/tXgIuANcAh4I5ke9qFVeoQaURsi4jBiBhcsqS+G1Ys3+pt9rV15z4mT8ytSuulf53g9mtW09/Xe7Kh1ycvu5BFZ57BZ+4br/gLpZahgf4Zn+tGYdbJ5lTnHhGvTj2W9FXgO8nTA8Dysl0vAA7OOTrrKJUmJf9s+zhwqta8kXHsg0cnTmvnW+9i2PXwikrWTeZ05S5padnTjwJTlTQ7gI2SFkpaCVwMPNVYiNYpKiXtEwGb7t998oq6kXHs6e91lYtZunpKIb8B/Bi4RNIBSdcDfy5pj6SngfcDnwGIiL3AduBZ4HvAjRFR3wKV1vGqJe3J48Fnt+9m5fBDvPHWsTldVaSNgVdbDHuuQzRm3aDmsExEfCJl89eq7H8bcFsjQVk+1VoZadP6S7jpvvGK75+6Qei1N2a3bilAX2+BzVevmjFsUm0x7EaGaMw6ne9QtbrUM1k6NNBPX+/cShyr6estMH7rFakJutZi2B6isfnKyd3qUu/Y9uarV6UuadeIaj1myqtcKvGNSDYfOblbXeq9g3NooJ+tH3tPU6/ge1T9l8XQQD8/Gr68YoL3jUg2Hzm5W11mcwfn0EA/47dewUtbPlz1irpexyOq3hQ1xTcimZ3i5G51qbQyUq0Vk5qVWOtZN9U3Ipmd4sU6rC6VVk
aqtWLS0EA/n3/wad6YrNzCdzamxvmrLcrhZG7m5G6klzjCqVWL+hYVKpYvVipDLP/syeOzazXQ11vgFxOTFVv7eoLUrDYn93ku7fb9TffvhuBk/5dqdemitILRY88fmVH/PjJW5LPbd5+sb6/XWQvPYPzWK1i35dHUXx6eIDWrzcm9i9W66QjSSxxnc6UdwD1PvHzyKntqXHz0pz8/bWWj2Zi6Mve6pmZz5+TepeptqNWMIY7p6Xti8jj3PPkyc12ed+rKvNai1mZWmZN7l6p201F5cqx2+34j5prYexbotCtzT5CazY2Te5eqddPRyFiRL35775z6vLSSa3PNmsP/lrpUtZuORsaKbLp/d+4SO5Qmcd0LxqxxTu5dqtrdmlt37pt1eWI7udTRrHFO7l2q2t2arRhjbyaXOpo1zmPuXWz6ZOTU4tV55lJHs+Zwcp8nppdGtlPPAnG8jgWx+13qaNY0Tu7zRFppZDssEJy98IyqPdkB/vLja5zUzZrIyb0Lpd2ZmtUk5YmAX9RI7H29BSd2syZzcu8yle5M/a3eQs2r51aY6udeaRK3t9DD5qtXtTMks3mhZrWMpLskHZb0TNm2cyQ9LOmF5PvistdulrRf0j5J61sVuKWrdGdqFol9anK00jqnixcV3G/drEXquXL/G+B/Af+3bNsw8EhEbJE0nDz/nKRLgY3AKmAZ8H1J74yI9g/2zkMjY8XclDmmTY66R4xZ+9RM7hHxA0krpm3eAPx+8vhu4HHgc8n2eyPiTeBFSfuBtcCPmxOuVTI1HJMH/X29/Gj48tO2uUeMWXvN9Sam8yPiEEDy/bxkez/wStl+B5JtM0i6QdKopNEjR6qv5mO1ZVUNM53r1M3yodkTqmnL1KcWOEfENmAbwODgYH7vhc+hkbEim3fsPTmOvrjKSknt0CNxIsLDLWY5Mtfk/qqkpRFxSNJS4HCy/QCwvGy/C4CDjQRopxsZK7Lp73efXCUJqq+U1A4nInhxy4czjcHMTjfXYZkdwHXJ4+uAb5Vt3yhpoaSVwMXAU42FaOW++O29pyX2PHAvGLP8qXnlLukblCZPz5V0ALgV2AJsl3Q98DLwMYCI2CtpO/AscAy40ZUyjSm/ISmrWvVqPMZulk+KuS6Z00SDg4MxOjqadRi5k2U/mHq4F4xZtiTtiojBtNd8h2rOlF+pL5DmtMB0q6WVOppZvji558gtI3u454mXT5YX5TGxCzwMY9YBvFhHToyMFU9L7Hn1ny670MMwZh3AV+45sXXnvlwndlFK7F8aWp11KGZWByf3nMjDuqE9Fcb4eyTuuPY9vmI36yAelsmJRWfO7JrYTr2FHj7xvuWpi2o7sZt1Hl+558DIWJFfv5VtueNU693Bd5zj7o1mXcDJPQe27tyX6fHLGwK5e6NZd/CwTA60c7y9Ume3rH/BmFlzObnnQG+hPf8Z+vt6K1bk5GFC18yax8k9ByaOnWj5MaZuPuqv0OTLzb/MuovH3NvklpE9fP2JlzM7fvnNR9P71bj5l1n3cXJvg6wTO8DgO84BOJngXRFj1t2c3Nsg68QOpWQ+lcBdEWPW/Zzcm6S8m2P51fAtI/lYtNoTpmbzi5N7E0zvu148OsFN941z033jbY+lt7CAicmZE7SeMDWbX1wt0wRbd+7LzYIabyv0pLYQ8ISp2fziK/c5Kh+GyVM3x6NvTPLlj6/xhKnZPOfkPgd5WP6uUgfHvkUFT5iamYdl5iLrYZgFgjuufQ+FnpnNBF7/zTFGxooZRGVmeeLkPgdZV56ciFI541lnzvzDa/JEuE+MmTWW3CW9JGmPpHFJo8m2cyQ9LOmF5Pvi5oSaH1lXnky1EPjFxGTq61n/8jGz7DVjzP39EfGzsufDwCMRsUXScPL8c004TmZGxop88dt7ee2NUjJtU5+viqYqX5b19VJMSeRZ//Ixs+y1Ik1tAO5OHt8NDLXgGG0zMlbkz7aPn0zsACll5G2zOJkwhVKSd9mjma
VpNLkH8I+Sdkm6Idl2fkQcAki+n5f2Rkk3SBqVNHrkyJEGw2idzTv2ciIntY6FHnHrR1adfD400M/t16ymv68XURqumVpRyczmt0aHZdZFxEFJ5wEPS3q+3jdGxDZgG8Dg4GBO0udMRyuMa7fbAsHW/zBzLVOXPZpZmoau3CPiYPL9MPBNYC3wqqSlAMn3w40GmZW89IXpLfRw57VrnMTNrG5zTu6SzpJ09tRj4ArgGWAHcF2y23XAtxoNMiv3PJl9N8e+3oKHWsxs1hoZljkf+Kakqc/5u4j4nqR/ArZLuh54GfhY42G238hYkZQbQFtugUp17P1uG2BmDZhzco+IfwHek7L9X4E/aCSoLE31jEkrMWy1T152IV8aWt3245pZ93FvGbJN6FN6Cwuc2M2saeZlch8ZK7J5x97cVMIA3H7Nu7MOwcy6yLxL7iNjRTb9/W4m81K8Tmk4xmPrZtZM8y65b96xN7PELsGiQg+/fqvUUbKvt8Dmq1c5sZtZ082r5H7LyJ5Mh2K+7Fp1M2uTedPy95aRPXz9iezq1vt6C07sZtY2XX/lPr2jYxZ6Cz1svnpV7R3NzJqkq5N7XpbD8x2mZtZuXT0sk/VyeL2FHu64dmazLzOzVuvKK/dbRvbwjSdfSV1AutUWFRYwMXmCZW4fYGYZ6rrkntXE6bqLzuGeP/63bT+umVmarkvuf9fmTo5nndnDbR/1mLqZ5UtXJfeRsWJbV036y4+7bt3M8qlrJlRHxorcdN942463QDixm1ludUVyb3diB3KzrqqZWZquSO7tTuxQWkzDzCyvOj65rxh+qOXHWKDTn/cWeti0/pKWH9fMbK46Orm3I7EvKizgzmvX0N/XiyhdsfuOUzPLu46tlvngnY+3/BiFHvE/rnk3QwP9TuZm1lE6Nrm/cPjXLflcAYEXqDazztay5C7pSuCvgB7g/0TEllYdqxm8OLWZdZOWjLlL6gH+N/Ah4FLgE5IubcWxmsGJ3cy6Tauu3NcC+yPiXwAk3QtsAJ5t0fHmxK0DzKxbtSq59wOvlD0/ALyvfAdJNwA3AFx44YUtCqOy/r5efjR8eduPa2bWDq0qhVTKttPu6YyIbRExGBGDS5YsaVEY6VynbmbdrlXJ/QCwvOz5BcDBFh2rpsICWLyo4Dp1M5s3WjUs80/AxZJWAkVgI/Afm3mAl7Z8uOZNTJ4oNbP5qiXJPSKOSfoUsJNSKeRdEbG32cd5acuHm/2RZmZdoWV17hHxXeC7rfp8MzOrrKN7y5iZWTondzOzLuTkbmbWhZzczcy6kCKyXy9O0hHgp3N8+7nAz5oYTis4xuZwjM2R9xjzHh/kJ8Z3RETqXaC5SO6NkDQaEYNZx1GNY2wOx9gceY8x7/FBZ8ToYRkzsy7k5G5m1oW6IblvyzqAOjjG5nCMzZH3GPMeH3RAjB0/5m5mZjN1w5W7mZlN4+RuZtaFOjq5S7pS0j5J+yUNZx3PFEkvSdojaVzSaLLtHEkPS3oh+b64zTHdJemwpGfKtlWMSdLNyXndJ2l9RvFtllRMzuO4pKuyii855nJJj0l6TtJeSZ9OtufpPFaKMTfnUtLbJD0laXcS4xeT7bk4j1Xiy805rEtEdOQXpVbC/wz8DnAmsBu4NOu4ktheAs6dtu3PgeHk8TDwP9sc0+8B7wWeqRUTpUXNdwMLgZXJee7JIL7NwH9N2bft8SXHXQq8N3l8NvD/kljydB4rxZibc0lppba3J48LwJPAZXk5j1Xiy805rOerk6/cTy7CHRFvAVOLcOfVBuDu5PHdwFA7Dx4RPwB+XmdMG4B7I+LNiHgR2E/pfLc7vkraHh9ARByKiJ8kj38FPEdpveA8ncdKMVaSRYwREa8nTwvJV5CT81glvkoy+f+xlk5O7mmLcOdl7bwA/lHSrmQhcIDzI+IQlP4BAudlFt0plWLK07n9lKSnk2GbqT/TM49P0gpggNJVXS7P47QYIUfnUlKPpHHgMPBwROTqPF
aID3J0Dmvp5ORecxHuDK2LiPcCHwJulPR7WQc0S3k5t18BLgLWAIeAO5LtmcYn6e3AA8BNEfHLarumbGtLnCkx5upcRsTxiFhDaX3ltZLeVWX3tsdYIb5cncNaOjm552oR7nIRcTD5fhj4JqU/0V6VtBQg+X44uwhPqhRTLs5tRLya/CM7AXyVU3/qZhafpAKlpHlPRDyYbM7VeUyLMY/nMonrKPA4cCU5O4/T48vrOaykk5P7yUW4JZ1JaRHuHRnHhKSzJJ099Ri4AniGUmzXJbtdB3wrmwhPUymmHcBGSQtVWuT8YuCpdgc39Q898VFK5zGz+CQJ+BrwXETcWfZSbs5jpRjzdC4lLZHUlzzuBT4APE9OzmOl+PJ0DuuS9YxuI1/AVZSqAf4Z+ELW8SQx/Q6lmfPdwN6puIDfBh4BXki+n9PmuL5B6U/JSUpXGtdXiwn4QnJe9wEfyii+vwX2AE9T+ge0NKv4kmP+e0p/bj8NjCdfV+XsPFaKMTfnEng3MJbE8gzw35LtuTiPVeLLzTms58vtB8zMulAnD8uYmVkFTu5mZl3Iyd3MrAs5uZuZdSEndzOzLuTkbmbWhZzczcy60P8HIzxDjKm1UbwAAAAASUVORK5CYII=\n",
"text/plain": [
"