..
    _href from docs/source/index.rst

=================
PyMC and PyTensor
=================

What is PyTensor
================

PyTensor is a package that allows us to define functions involving array
operations and linear algebra. When we define a PyMC model, we implicitly
build up a PyTensor function from the space of our parameters to
their posterior probability density up to a constant factor. We then use
symbolic manipulations of this function to also get access to its gradient.

For a thorough introduction to PyTensor see the
:doc:`pytensor docs <pytensor:index>`,
but for the most part you don't need detailed knowledge about it as long
as you are not trying to define new distributions or other extensions
of PyMC. But let's look at a simple example to get a rough
idea of what is going on. Say we want to define some (arbitrarily
chosen) function of a scalar `a` and two vectors `x` and `y`.

First, we need to define symbolic variables for our inputs (this
is similar to e.g. SymPy's `Symbol`)::

    import pytensor
    import pytensor.tensor as at
    # We don't specify the dtype of our input variables, so it
    # defaults to using float64 without any special config.
    a = at.scalar('a')
    x = at.vector('x')
    y = at.vector('y')

Next, we use those variables to build up the computation symbolically.
Nothing is evaluated at this point; we only record what operations we
need to do to compute the output. Since the function was arbitrary
anyway, we pick some expression in `a`, `x` and `y`::

    inner = a * x**3 + y**2
    out = at.exp(inner)

The variable `out` now contains a symbolic representation
of the exponential of `inner`. Somewhat surprisingly, it
would also have worked if we used `np.exp`. This is because numpy
gives objects it operates on a chance to define the results of
operations themselves. PyTensor variables do this for a large number
of operations. We usually still prefer the PyTensor
functions instead of the numpy versions, as that makes it clear that
we are working with symbolic input instead of plain arrays.
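
For instance, both of these lines create a symbolic variable rather
than evaluating anything (a small sketch, reusing `inner` from above)::

    import numpy as np

    out1 = at.exp(inner)
    out2 = np.exp(inner)  # numpy dispatches back to PyTensor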

Now we can tell PyTensor to build a function that does this computation.
With a typical configuration, PyTensor generates C code, compiles it,
and creates a Python function which wraps the C function::

    func = pytensor.function([a, x, y], [out])

We can call this function with actual arrays as many times as we want::

    # example values: a scalar and two equal-length vectors
    a_val = 1.2
    x_vals = np.random.randn(10)
    y_vals = np.random.randn(10)

    out = func(a_val, x_vals, y_vals)
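
PyTensor can also build the gradient of such a function symbolically,
which is what gives PyMC access to the gradient of the logp, as
mentioned above. A minimal sketch, reusing `inner` and the inputs from
above (`pytensor.grad` needs a scalar cost)::

    cost = at.sum(at.exp(inner))  # scalar cost built from `inner`
    grad_x = pytensor.grad(cost, wrt=x)
    grad_func = pytensor.function([a, x, y], grad_x)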

For the most part the symbolic PyTensor variables can be operated on
like NumPy arrays. Most NumPy functions are available in `pytensor.tensor`
(which is typically imported as `at`). A lot of linear algebra operations
can be found in `at.nlinalg` and `at.slinalg` (the NumPy and SciPy
operations respectively). Some support for sparse matrices is available
in `pytensor.sparse`. For a detailed overview of available operations,
see :mod:`the pytensor api docs <pytensor.tensor>`.
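
For instance, something like this builds symbolic linear algebra
expressions (a sketch; the exact names used here are assumptions)::

    m = at.matrix('m')
    m_inv = at.nlinalg.matrix_inverse(m)
    m_chol = at.slinalg.cholesky(m)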

A notable exception where PyTensor variables do *not* behave like
NumPy arrays are operations involving conditional execution.

Code like this won't work as expected::
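
    # a sketch of the pitfall, assuming a symbolic vector as before:
    # Python's `if` needs a concrete truth value while the graph is
    # being built, when the values of `a` are still unknown
    a = at.vector('a')
    if (a > 0).all():
        b = a ** 2
    else:
        b = -a

Conditions on symbolic variables have to be expressed symbolically
instead, for example with `at.switch`, which selects between two
branches elementwise::

    b = at.switch(a > 0, a ** 2, -a)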

Changing elements of an array is possible using `at.set_subtensor`::

    a = at.vector('a')
    b = at.set_subtensor(a[:10], 1)

    # is roughly equivalent to this (although pytensor avoids
    # the copy if `a` isn't used anymore)
    a = np.random.randn(10)
    b = a.copy()
    b[:10] = 1
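
Compiling the symbolic version makes this functional behaviour
explicit (a sketch with hypothetical names, reusing the imports from
above)::

    vec = at.vector('vec')
    filled = at.set_subtensor(vec[:10], 1)
    fill_first_ten = pytensor.function([vec], filled)
    print(fill_first_ten(np.zeros(20)))  # first ten entries are now 1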

How PyMC uses PyTensor
======================

Now that we have a basic understanding of PyTensor we can look at what
happens if we define a PyMC model. Let's look at a simple example::

    true_mu = 0.1
    data = true_mu + np.random.randn(100)  # observations around true_mu

    with pm.Model() as model:
        mu = pm.Normal('mu', mu=0, sigma=1)
        pm.Normal('y', mu=mu, sigma=1, observed=data)

Defining this model implicitly builds a function from the value of the
free parameter :math:`μ` to its posterior log probability up to a
constant:

.. math::

   \log P(μ|y) + C = \log N(μ|0, 1) + \sum_i \log N(y_i|μ, 1)

where with the normal likelihood :math:`N(x|μ, σ^2)` we denote the
density of a normal distribution with mean :math:`μ` and variance
:math:`σ^2`.

To build that function we need to keep track of two things: The parameter
space (the *free variables*) and the logp function. For each free variable
we generate a PyTensor variable. And for each variable (observed or otherwise)
we add a term to the global logp. In the background something similar to
this is happening::
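
    # illustration only -- this is not the real PyMC API, just the
    # general shape of what happens (`add_free_variable` and
    # `add_logp_term` are made-up names)
    model = pm.Model()
    mu = model.add_free_variable('mu')
    model.add_logp_term(pm.Normal.dist(0, 1).logp(mu))
    model.add_logp_term(pm.Normal.dist(mu, 1).logp(data))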

So calling `pm.Normal()` modifies the model: It changes the logp function
of the model. If the `observed` keyword isn't set it also creates a new
free variable. In contrast, `pm.Normal.dist()` doesn't care about the model,
it just creates an object that represents the normal distribution. Calling
`logp` on this object creates a PyTensor variable for the log probability
or log probability density of the distribution, but again without changing
the model in any way.
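
For example, a small sketch following the `logp` call described above
(`eval` forces numerical evaluation of a symbolic variable)::

    dist = pm.Normal.dist(mu=0, sigma=1)
    logp_val = dist.logp(0.5)  # a PyTensor variable; no model involved
    print(logp_val.eval())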

For example, a variable with the `observed` keyword set, say
`pm.Normal('y', mu, sigma, observed=data)`,
is roughly equivalent to this::

    model.add_logp_term(pm.Normal.dist(mu, sigma).logp(data))

The return values of the variable constructors are subclasses
of PyTensor variables, so when we define a variable we can use any
PyTensor operation on them::

    design_matrix = np.array([[...]])
    with pm.Model() as model:
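        # a hypothetical continuation for illustration: any PyTensor
        # operation (here `at.dot`) can be applied to model variables
        beta = pm.Normal('beta', mu=0, sigma=1, shape=2)
        predict = at.dot(design_matrix, beta)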