
Commit f1646c9

pymc-devs · ricardoV94 authored and committed
Replace Aesara by PyTensor
1 parent 636b957 commit f1646c9

File tree

153 files changed: +1605 -1604 lines changed


.github/ISSUE_TEMPLATE/bug-report.yml

+1 -1

@@ -52,7 +52,7 @@ body:
 label: "PyMC version information:"
 description: >
 PyMC/PyMC3 Version:
-Aesara/Theano Version:
+PyTensor/Aesara Version:
 Python Version:
 Operating system:
 How did you install PyMC/PyMC3: (conda/pip)

.github/workflows/tests.yml

+6 -6

@@ -40,7 +40,7 @@ jobs:
 - |
 pymc/tests/test_util.py
 pymc/tests/distributions/test_logprob.py
-pymc/tests/test_aesaraf.py
+pymc/tests/test_pytensorf.py
 pymc/tests/test_math.py
 pymc/tests/backends/test_base.py
 pymc/tests/backends/test_ndarray.py
@@ -102,7 +102,7 @@ jobs:
 runs-on: ${{ matrix.os }}
 env:
 TEST_SUBSET: ${{ matrix.test-subset }}
-AESARA_FLAGS: floatX=${{ matrix.floatx }},gcc__cxxflags='-march=native'
+PYTENSOR_FLAGS: floatX=${{ matrix.floatx }},gcc__cxxflags='-march=native'
 defaults:
 run:
 shell: bash -l {0}
@@ -173,7 +173,7 @@ jobs:
 runs-on: ${{ matrix.os }}
 env:
 TEST_SUBSET: ${{ matrix.test-subset }}
-AESARA_FLAGS: floatX=${{ matrix.floatx }},gcc__cxxflags='-march=core2'
+PYTENSOR_FLAGS: floatX=${{ matrix.floatx }},gcc__cxxflags='-march=core2'
 defaults:
 run:
 shell: cmd
@@ -252,7 +252,7 @@ jobs:
 runs-on: ${{ matrix.os }}
 env:
 TEST_SUBSET: ${{ matrix.test-subset }}
-AESARA_FLAGS: floatX=${{ matrix.floatx }},gcc__cxxflags='-march=native'
+PYTENSOR_FLAGS: floatX=${{ matrix.floatx }},gcc__cxxflags='-march=native'
 defaults:
 run:
 shell: bash -l {0}
@@ -317,7 +317,7 @@ jobs:
 runs-on: ${{ matrix.os }}
 env:
 TEST_SUBSET: ${{ matrix.test-subset }}
-AESARA_FLAGS: floatX=${{ matrix.floatx }},gcc__cxxflags='-march=native'
+PYTENSOR_FLAGS: floatX=${{ matrix.floatx }},gcc__cxxflags='-march=native'
 defaults:
 run:
 shell: bash -l {0}
@@ -387,7 +387,7 @@ jobs:
 runs-on: ${{ matrix.os }}
 env:
 TEST_SUBSET: ${{ matrix.test-subset }}
-AESARA_FLAGS: floatX=${{ matrix.floatx }},gcc__cxxflags='-march=core2'
+PYTENSOR_FLAGS: floatX=${{ matrix.floatx }},gcc__cxxflags='-march=core2'
 defaults:
 run:
 shell: cmd
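
The workflow change above is a pure rename of the configuration variable: PyTensor reads its flags from PYTENSOR_FLAGS, just as Aesara read AESARA_FLAGS. A minimal sketch of how the variable is picked up (illustrative only, not part of the commit; assumes pytensor is installed):

    import os

    # The flags must be set before pytensor is imported.
    os.environ["PYTENSOR_FLAGS"] = "floatX=float32"

    import pytensor

    # floatX now reflects the value passed via PYTENSOR_FLAGS.
    print(pytensor.config.floatX)  # -> float32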

.pre-commit-config.yaml

+1 -1

@@ -84,7 +84,7 @@ repos:
 entry: >
 (?x)(arviz-devs.github.io|
 python.arviz.org|
-aesara.readthedocs.io|
+pytensor.readthedocs.io|
 pymc-experimental.readthedocs.io|
 docs.pymc.io|
 www.pymc.io|

ARCHITECTURE.md

+3 -3

@@ -25,7 +25,7 @@ It is easier to start with functionality that is not present in PyMC but
 rather deferred to outside libraries. If seeking to understand any
 of the topics below refer to that specific library

-### Aesara
+### PyTensor
 * Gradient computation
 * Random number generation
 * Low level tensor operation definition
@@ -62,7 +62,7 @@ In no particular order they are

 * `ContextMeta`: The context manager that enables the `with pm.Model() as model` syntax
 * {class}`~pymc.Factor`: Defines the methods for the various logprobs for models
-* `ValueGrad` which handles the value and gradient and is the main connection point to Aesara
+* `ValueGrad` which handles the value and gradient and is the main connection point to PyTensor
 * `Deterministic` and `Potential`: Definitions for two pieces of functionality useful in some model definitions

 ## distributions/
@@ -74,7 +74,7 @@ Important modules to note are
 a random variable distribution from a likelihood distribution.

 * `logprob.py`: This contains the log probability logic for the distributions themselves.
-The log probability calculation is deferred to Aesara
+The log probability calculation is deferred to PyTensor

 * `dist_math.py`: Various convenience operators for distributions.
 This includes mathematical operators such as `logpower` or `all_true`methods.
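
The ARCHITECTURE.md hunks above list the responsibilities PyMC now defers to PyTensor instead of Aesara (gradient computation, random number generation, low-level tensor operations). A minimal sketch of the first of these, assuming pytensor is installed (illustrative only, not part of the commit):

    import pytensor
    import pytensor.tensor as at

    x = at.scalar("x")
    cost = x**2 + 3 * x
    grad = pytensor.grad(cost, x)    # symbolic gradient: 2*x + 3
    f = pytensor.function([x], grad)
    print(f(1.0))                    # -> 5.0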

README.rst

+1 -1

@@ -26,7 +26,7 @@ Features
 - **Variational inference**: `ADVI <http://www.jmlr.org/papers/v18/16-107.html>`__
 for fast approximate posterior estimation as well as mini-batch ADVI
 for large data sets.
-- Relies on `Aesara <https://aesara.readthedocs.io/en/latest/>`__ which provides:
+- Relies on `PyTensor <https://pytensor.readthedocs.io/en/latest/>`__ which provides:
 * Computation optimization and dynamic C or JAX compilation
 * NumPy broadcasting and advanced indexing
 * Linear algebra operators

benchmarks/benchmarks/benchmarks.py

+3 -3

@@ -14,8 +14,8 @@
 import time
 import timeit

-import aesara
-import aesara.tensor as at
+import pytensor
+import pytensor.tensor as at
 import arviz as az
 import numpy as np
 import pandas as pd
@@ -27,7 +27,7 @@ def glm_hierarchical_model(random_seed=123):
 """Sample glm hierarchical model to use in benchmarks"""
 np.random.seed(random_seed)
 data = pd.read_csv(pm.get_data("radon.csv"))
-data["log_radon"] = data["log_radon"].astype(aesara.config.floatX)
+data["log_radon"] = data["log_radon"].astype(pytensor.config.floatX)
 county_idx = data.county_code.values

 n_counties = len(data.county.unique())
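
The benchmark change is mechanical: only the import lines and the config module name change, with the configured float type now read from pytensor.config.floatX. A short sketch of the same cast, assuming numpy and pytensor are installed (illustrative only, not part of the commit):

    import numpy as np
    import pytensor

    # Cast data to PyTensor's configured float type before building a model,
    # mirroring the astype(pytensor.config.floatX) call in the benchmark.
    values = np.random.randn(100).astype(pytensor.config.floatX)
    print(values.dtype)  # float64 unless floatX is overridden, e.g. via PYTENSOR_FLAGS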

conda-envs/environment-dev.yml

+1 -1

@@ -5,7 +5,6 @@ channels:
 - defaults
 dependencies:
 # Base dependencies
-- aesara=2.8.8
 - arviz>=0.13.0
 - blas
 - cachetools>=4.2.1
@@ -15,6 +14,7 @@ dependencies:
 - numpy>=1.15.0
 - pandas>=0.24.0
 - pip
+- pytensor=2.8.10
 - python-graphviz
 - networkx
 - scipy>=1.4.1

conda-envs/environment-test.yml

+2 -1

@@ -5,7 +5,6 @@ channels:
 - defaults
 dependencies:
 # Base dependencies
-- aesara=2.8.8
 - arviz>=0.13.0
 - blas
 - cachetools>=4.2.1
@@ -17,6 +16,8 @@ dependencies:
 - mkl-service
 - numpy>=1.15.0
 - pandas>=0.24.0
+- pip
+- pytensor=2.8.10
 - python-graphviz
 - networkx
 - scipy>=1.4.1

conda-envs/windows-environment-dev.yml

+1 -1

@@ -5,7 +5,6 @@ channels:
 - defaults
 dependencies:
 # Base dependencies (see install guide for Windows)
-- aesara=2.8.8
 - arviz>=0.13.0
 - blas
 - cachetools>=4.2.1
@@ -15,6 +14,7 @@ dependencies:
 - numpy>=1.15.0
 - pandas>=0.24.0
 - pip
+- pytensor=2.8.10
 - python-graphviz
 - networkx
 - scipy>=1.4.1

conda-envs/windows-environment-test.yml

+1 -1

@@ -5,7 +5,6 @@ channels:
 - defaults
 dependencies:
 # Base dependencies (see install guide for Windows)
-- aesara=2.8.8
 - arviz>=0.13.0
 - blas
 - cachetools>=4.2.1
@@ -18,6 +17,7 @@ dependencies:
 - numpy>=1.15.0
 - pandas>=0.24.0
 - pip
+- pytensor=2.8.10
 - python-graphviz
 - networkx
 - scipy>=1.4.1

docs/source/PyMC_and_Aesara.rst → docs/source/PyMC_and_PyTensor.rst

+28 -28

@@ -3,21 +3,21 @@
 ..
 _href from docs/source/index.rst

-===============
-PyMC and Aesara
-===============
+=================
+PyMC and PyTensor
+=================

-What is Aesara
-==============
+What is PyTensor
+================

-Aesara is a package that allows us to define functions involving array
+PyTensor is a package that allows us to define functions involving array
 operations and linear algebra. When we define a PyMC model, we implicitly
-build up an Aesara function from the space of our parameters to
+build up an PyTensor function from the space of our parameters to
 their posterior probability density up to a constant factor. We then use
 symbolic manipulations of this function to also get access to its gradient.

-For a thorough introduction to Aesara see the
-:doc:`aesara docs <aesara:index>`,
+For a thorough introduction to PyTensor see the
+:doc:`pytensor docs <pytensor:index>`,
 but for the most part you don't need detailed knowledge about it as long
 as you are not trying to define new distributions or other extensions
 of PyMC. But let's look at a simple example to get a rough
@@ -33,8 +33,8 @@ arbitrarily chosen) function
 First, we need to define symbolic variables for our inputs (this
 is similar to eg SymPy's `Symbol`)::

-import aesara
-import aesara.tensor as at
+import pytensor
+import pytensor.tensor as at
 # We don't specify the dtype of our input variables, so it
 # defaults to using float64 without any special config.
 a = at.scalar('a')
@@ -56,16 +56,16 @@ do to compute the output::
 of the exponential of `inner`. Somewhat surprisingly, it
 would also have worked if we used `np.exp`. This is because numpy
 gives objects it operates on a chance to define the results of
-operations themselves. Aesara variables do this for a large number
-of operations. We usually still prefer the Aesara
+operations themselves. PyTensor variables do this for a large number
+of operations. We usually still prefer the PyTensor
 functions instead of the numpy versions, as that makes it clear that
 we are working with symbolic input instead of plain arrays.

-Now we can tell Aesara to build a function that does this computation.
-With a typical configuration, Aesara generates C code, compiles it,
+Now we can tell PyTensor to build a function that does this computation.
+With a typical configuration, PyTensor generates C code, compiles it,
 and creates a python function which wraps the C function::

-func = aesara.function([a, x, y], [out])
+func = pytensor.function([a, x, y], [out])

 We can call this function with actual arrays as many times as we want::

@@ -75,15 +75,15 @@ We can call this function with actual arrays as many times as we want::

 out = func(a_val, x_vals, y_vals)

-For the most part the symbolic Aesara variables can be operated on
-like NumPy arrays. Most NumPy functions are available in `aesara.tensor`
+For the most part the symbolic PyTensor variables can be operated on
+like NumPy arrays. Most NumPy functions are available in `pytensor.tensor`
 (which is typically imported as `at`). A lot of linear algebra operations
 can be found in `at.nlinalg` and `at.slinalg` (the NumPy and SciPy
 operations respectively). Some support for sparse matrices is available
-in `aesara.sparse`. For a detailed overview of available operations,
-see :mod:`the aesara api docs <aesara.tensor>`.
+in `pytensor.sparse`. For a detailed overview of available operations,
+see :mod:`the pytensor api docs <pytensor.tensor>`.

-A notable exception where Aesara variables do *not* behave like
+A notable exception where PyTensor variables do *not* behave like
 NumPy arrays are operations involving conditional execution.

 Code like this won't work as expected::
@@ -123,16 +123,16 @@ Changing elements of an array is possible using `at.set_subtensor`::
 a = at.vector('a')
 b = at.set_subtensor(a[:10], 1)

-# is roughly equivalent to this (although aesara avoids
+# is roughly equivalent to this (although pytensor avoids
 # the copy if `a` isn't used anymore)
 a = np.random.randn(10)
 b = a.copy()
 b[:10] = 1

-How PyMC uses Aesara
+How PyMC uses PyTensor
 ====================

-Now that we have a basic understanding of Aesara we can look at what
+Now that we have a basic understanding of PyTensor we can look at what
 happens if we define a PyMC model. Let's look at a simple example::

 true_mu = 0.1
@@ -159,7 +159,7 @@ where with the normal likelihood :math:`N(x|μ,σ^2)`

 To build that function we need to keep track of two things: The parameter
 space (the *free variables*) and the logp function. For each free variable
-we generate an Aesara variable. And for each variable (observed or otherwise)
+we generate an PyTensor variable. And for each variable (observed or otherwise)
 we add a term to the global logp. In the background something similar to
 this is happening::

@@ -177,7 +177,7 @@ So calling `pm.Normal()` modifies the model: It changes the logp function
 of the model. If the `observed` keyword isn't set it also creates a new
 free variable. In contrast, `pm.Normal.dist()` doesn't care about the model,
 it just creates an object that represents the normal distribution. Calling
-`logp` on this object creates an Aesara variable for the logp probability
+`logp` on this object creates an PyTensor variable for the logp probability
 or log probability density of the distribution, but again without changing
 the model in any way.

@@ -209,8 +209,8 @@ is roughly equivalent to this::
 model.add_logp_term(pm.Normal.dist(mu, sigma).logp(data))

 The return values of the variable constructors are subclasses
-of Aesara variables, so when we define a variable we can use any
-Aesara operation on them::
+of PyTensor variables, so when we define a variable we can use any
+PyTensor operation on them::

 design_matrix = np.array([[...]])
 with pm.Model() as model:
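
The renamed guide above builds a PyTensor function step by step: define symbolic inputs, combine them into an expression, compile with pytensor.function, and call the result with plain NumPy arrays. Assembled into one runnable sketch (assumes numpy and pytensor are installed; the expression for `inner` and `out` is a stand-in, since those lines are not part of the diff):

    import numpy as np
    import pytensor
    import pytensor.tensor as at

    a = at.scalar('a')
    x = at.vector('x')
    y = at.vector('y')

    inner = a * x**3 + y**2          # stand-in for the guide's example expression
    out = at.exp(inner).sum()

    func = pytensor.function([a, x, y], [out])

    a_val = 1.2
    x_vals = np.random.randn(10)
    y_vals = np.random.randn(10)
    print(func(a_val, x_vals, y_vals))

As the guide notes, the compiled function can then be called with actual arrays as many times as needed.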

docs/source/api.rst

+1 -1

@@ -18,7 +18,7 @@ API
 api/ode
 api/tuning
 api/math
-api/aesaraf
+api/pytensorf
 api/shape_utils
 api/misc

docs/source/api/math.rst

+2 -2

@@ -3,8 +3,8 @@ Math
 ====

 This submodule contains various mathematical functions. Most of them are imported directly
-from aesara.tensor (see there for more details). Doing any kind of math with PyMC random
-variables, or defining custom likelihoods or priors requires you to use these Aesara
+from pytensor.tensor (see there for more details). Doing any kind of math with PyMC random
+variables, or defining custom likelihoods or priors requires you to use these PyTensor
 expressions rather than NumPy or Python code.

 .. currentmodule:: pymc
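
As a quick illustration of that docstring (a sketch assuming pymc is installed; not part of the commit), math on PyMC random variables goes through pymc.math, which wraps PyTensor operations, rather than plain NumPy or Python code:

    import pymc as pm

    with pm.Model():
        x = pm.Normal("x")
        # pm.math.exp builds a symbolic PyTensor expression; np.exp(x) generally
        # also works on PyTensor variables, as the PyMC_and_PyTensor guide notes,
        # but the pm.math form keeps the symbolic nature explicit.
        y = pm.Deterministic("y", pm.math.exp(x))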

docs/source/api/aesaraf.rst → docs/source/api/pytensorf.rst

+1 -1

@@ -1,4 +1,4 @@
-Aesara utils
+PyTensor utils
 ************

 .. currentmodule:: pymc

docs/source/conf.py

+4 -4

@@ -63,8 +63,8 @@
 }
 # fmt: on
 numpydoc_xref_aliases = {
-"TensorVariable": ":class:`~aesara.tensor.TensorVariable`",
-"RandomVariable": ":class:`~aesara.tensor.random.RandomVariable`",
+"TensorVariable": ":class:`~pytensor.tensor.TensorVariable`",
+"RandomVariable": ":class:`~pytensor.tensor.random.RandomVariable`",
 "ndarray": ":class:`~numpy.ndarray`",
 "Covariance": ":mod:`Covariance <pymc.gp.cov>`",
 "Mean": ":mod:`Mean <pymc.gp.mean>`",
@@ -74,7 +74,7 @@
 "Point": ":class:`~pymc.Point`",
 "Model": ":class:`~pymc.Model`",
 "SMC_kernel": ":ref:`SMC Kernel <smc_kernels>`",
-"Aesara_Op": ":class:`Aesara Op <aesara.graph.op.Op>`",
+"PyTensor_Op": ":class:`PyTensor Op <pytensor.graph.op.Op>`",
 "tensor_like": ":term:`tensor_like`",
 "numpy_Generator": ":class:`~numpy.random.Generator`",
 "Distribution": ":ref:`Distribution <api_distributions>`",
@@ -187,7 +187,7 @@
 # intersphinx configuration to ease linking arviz docs
 intersphinx_mapping = {
 "arviz": ("https://python.arviz.org/en/latest/", None),
-"aesara": ("https://aesara.readthedocs.io/en/latest/", None),
+"pytensor": ("https://pytensor.readthedocs.io/en/latest/", None),
 "home": ("https://www.pymc.io", None),
 "pmx": ("https://www.pymc.io/projects/experimental/en/latest", None),
 "numpy": ("https://numpy.org/doc/stable/", None),
