Commit ea8c28b
Rename project and switch to PyTensor
Parent: d527c46

25 files changed: +79 -79 lines

.github/workflows/test.yml (+2 -2)

@@ -45,10 +45,10 @@ jobs:
           python-version: ${{matrix.python-version}}
           use-mamba: true
           use-only-tar-bz2: true # IMPORTANT: This needs to be set for caching to work properly!
-      - name: Install aesara-federated
+      - name: Install pytensor-federated
        run: |
          conda activate aefenv
          pip install -e .
      - name: Run tests
        run: |
-          pytest -v --cov=./aesara_federated --cov-report xml --cov-report term-missing .
+          pytest -v --cov=./pytensor_federated --cov-report xml --cov-report term-missing .

.pre-commit-config.yaml (+2 -2)

@@ -15,9 +15,9 @@ repos:
   - id: isort
     name: isort
     args: ["--profile", "black"]
-    exclude: (aesara_federated/rpc.py|aesara_federated/npproto/)
+    exclude: (pytensor_federated/rpc.py|pytensor_federated/npproto/)
 - repo: https://github.com/psf/black
   rev: 22.3.0
   hooks:
   - id: black
-    exclude: (aesara_federated/rpc.py|aesara_federated/npproto/)
+    exclude: (pytensor_federated/rpc.py|pytensor_federated/npproto/)

README.md (+10 -10)

@@ -1,12 +1,12 @@
-[![pipeline](https://github.com/michaelosthege/aesara-federated/workflows/test/badge.svg)](https://github.com/michaelosthege/aesara-federated/actions)
+[![pipeline](https://github.com/michaelosthege/pytensor-federated/workflows/test/badge.svg)](https://github.com/michaelosthege/pytensor-federated/actions)
 
-# `aesara-federated`
-This package implements federated computing with [Aesara](https://github.com/aesara-devs/aesara).
+# `pytensor-federated`
+This package implements federated computing with [PyTensor](https://github.com/pymc-devs/pytensor).
 
-Using `aesara-federated`, differentiable cost functions can be computed on federated nodes.
+Using `pytensor-federated`, differentiable cost functions can be computed on federated nodes.
 Inputs and outputs are transmitted in binary via a bidirectional gRPC stream.
 
-A client-side `LogpGradOp` is provided to conveniently embed federated compute operations in Aesara graphs such as a [PyMC](https://github.com/pymc-devs/pymc) model.
+A client-side `LogpGradOp` is provided to conveniently embed federated compute operations in PyTensor graphs such as a [PyMC](https://github.com/pymc-devs/pymc) model.
 
 The example code implements a simple Bayesian linear regression to data that is "private" to the federated compute process.

@@ -21,7 +21,7 @@ python demo_model.py
 ```
 
 ## Architecture
-`aesara-federated` is designed to be a very generalizable framework for federated computing with gRPC, but it comes with implementations for Aesara, and specifically for use cases of Bayesian inference.
+`pytensor-federated` is designed to be a very generalizable framework for federated computing with gRPC, but it comes with implementations for PyTensor, and specifically for use cases of Bayesian inference.
 This is reflected in the actual implementation, where the most basic gRPC service implementation -- the `ArraysToArraysService` -- is wrapped by a few implementation flavors, specifically for common use cases in Bayesian inference.
 
 At the core, everything is built around an `ArraysToArrays` gRPC service, which takes any number of (NumPy) arrays as parameters, and returns any number of (NumPy) arrays as outputs.

@@ -54,10 +54,10 @@ Different sub-graphs of this example could be wrapped by an `ArraysToArraysServi
 If the entire model is differentiable, one can even return gradients.
 For example, with a linear model: `[slope, intercept] -> [LL, dLL_dslope, dLL_dintercept]`.
 
-The role of Aesara here is purely technical:
-Aesara is a graph computation framework that implements auto-differentiation.
-Wrapping the `ArraysToArraysServiceClient` in Aesara `Op`s simply makes it easier to build more sophisticated compute graphs.
-Aesara is also the computation backend for PyMC, which is the most popular framework for Bayesian inference in Python.
+The role of PyTensor here is purely technical:
+PyTensor is a graph computation framework that implements auto-differentiation.
+Wrapping the `ArraysToArraysServiceClient` in PyTensor `Op`s simply makes it easier to build more sophisticated compute graphs.
+PyTensor is also the computation backend for PyMC, which is the most popular framework for Bayesian inference in Python.
 
 
 ## Installation & Contributing
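To make the client-side story above concrete, here is a minimal sketch of embedding the federated log-likelihood in a PyMC model. It builds only on names imported in `demo_model.py` (`LogpGradOp`, `LogpGradServiceClient`); the client constructor arguments, the placeholder address, and the `[logp, *gradients]` output convention (suggested by `test_wrapper_ops.py`) are assumptions, not a verbatim excerpt from the repository.

```python
# Hedged sketch: assumes LogpGradServiceClient takes (host, port) and that
# LogpGradOp returns [logp, *gradients], as test_wrapper_ops.py suggests.
import pymc as pm

from pytensor_federated import LogpGradOp, LogpGradServiceClient

client = LogpGradServiceClient("127.0.0.1", 50051)  # placeholder address
federated_logp = LogpGradOp(client)

with pm.Model():
    intercept = pm.Normal("intercept")
    slope = pm.Normal("slope")
    # The first output is the log-likelihood computed on the federated node.
    logp, *_ = federated_logp(intercept, slope)
    pm.Potential("federated_loglikelihood", logp)
    idata = pm.sample()
```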

demo_model.py (+1 -1)

@@ -6,7 +6,7 @@
 import numpy as np
 import pymc as pm
 
-from aesara_federated import AsyncLogpGradOp, LogpGradOp, LogpGradServiceClient
+from pytensor_federated import AsyncLogpGradOp, LogpGradOp, LogpGradServiceClient
 
 _log = logging.getLogger(__file__)
 logging.basicConfig(level=logging.INFO)

demo_node.py (+4 -4)

@@ -5,12 +5,12 @@
 import time
 from typing import Sequence, Tuple
 
-import aesara
-import aesara.tensor as at
 import grpclib
 import numpy as np
+import pytensor
+import pytensor.tensor as at
 
-from aesara_federated import ArraysToArraysService, wrap_logp_grad_func
+from pytensor_federated import ArraysToArraysService, wrap_logp_grad_func
 
 _log = logging.getLogger(__file__)
 logging.basicConfig(level=logging.INFO)

@@ -36,7 +36,7 @@ def _make_function(x, y, sigma):
     pdf = 1 / (sigma * np.sqrt(2 * np.pi)) * at.exp(-0.5 * ((y - pred) / sigma) ** 2)
     logp = at.log(pdf).sum()
     grad = at.grad(logp, wrt=[intercept, slope])
-    fn = aesara.function(
+    fn = pytensor.function(
         inputs=[intercept, slope],
         outputs=[logp, *grad],
     )
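For orientation, a hedged sketch of how a compiled `[intercept, slope] -> [logp, *grads]` function like the one above might be exposed by the node. `ArraysToArraysService` and `wrap_logp_grad_func` are the names imported in this file; the grpclib server wiring shown here is an assumption about the demo's setup, not a copy of it.

```python
# Sketch only: assumes ArraysToArraysService can be constructed from the
# wrapped compute function and served as a grpclib handler.
import asyncio

from grpclib.server import Server

from pytensor_federated import ArraysToArraysService, wrap_logp_grad_func


async def serve(fn) -> None:
    # wrap_logp_grad_func adapts a [params] -> [logp, *grads] function
    # to the ArraysToArrays signature (inferred from its name).
    service = ArraysToArraysService(wrap_logp_grad_func(fn))
    server = Server([service])
    await server.start("127.0.0.1", 50051)  # placeholder address
    await server.wait_closed()

# e.g. asyncio.run(serve(fn)) with the pytensor.function compiled above
```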

environment.yml (+1 -1)

@@ -9,4 +9,4 @@ dependencies:
   - pip
   - pip:
     - betterproto[compiler]==2.0.0b5
-    - pymc>=4.1.6
+    - pymc==5.0.0
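Note: PyMC 4.x is built on Aesara, while PyMC 5.0 is the first release built on PyTensor, so the rename goes hand in hand with the stricter pin.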

protobufs/generate.py (+1 -1)

@@ -8,7 +8,7 @@
 
 DP_HERE = pathlib.Path(__file__).parent.absolute()
 DP_ROOT = DP_HERE.parent
-DP_PROJ = DP_ROOT / "aesara_federated"
+DP_PROJ = DP_ROOT / "pytensor_federated"
 FP_PROTO = DP_HERE / "service.proto"

pyproject.toml (+1 -1)

@@ -6,6 +6,6 @@ line-length = 100
 
 [tool.mypy]
 exclude = [
-    "^aesara_federated/test_*",
+    "^pytensor_federated/test_*",
 ]
 ignore_missing_imports = true

aesara_federated/__init__.py renamed to pytensor_federated/__init__.py (+1 -1)

@@ -19,4 +19,4 @@
 from .service import ArraysToArraysService, ArraysToArraysServiceClient
 from .signatures import ComputeFunc, LogpFunc, LogpGradFunc
 
-__version__ = "0.3.0"
+__version__ = "0.4.0"
File renamed without changes.

aesara_federated/op_async.py renamed to pytensor_federated/op_async.py (+9 -9)

@@ -1,14 +1,14 @@
 import asyncio
 from typing import Any, Callable, List, Optional, Sequence
 
-import aesara.tensor as at
-from aesara.compile import optdb
-from aesara.compile.ops import FromFunctionOp
-from aesara.graph import FunctionGraph
-from aesara.graph.basic import Apply, Variable, is_in_ancestors
-from aesara.graph.features import ReplaceValidate
-from aesara.graph.op import Op, OutputStorageType, ParamsInputType
-from aesara.graph.rewriting.basic import GraphRewriter
+import pytensor.tensor as at
+from pytensor.compile import optdb
+from pytensor.compile.ops import FromFunctionOp
+from pytensor.graph import FunctionGraph
+from pytensor.graph.basic import Apply, Variable, is_in_ancestors
+from pytensor.graph.features import ReplaceValidate
+from pytensor.graph.op import Op, OutputStorageType, ParamsInputType
+from pytensor.graph.rewriting.basic import GraphRewriter
 
 from .utils import get_useful_event_loop

@@ -37,7 +37,7 @@ async def perform_async(
 
 
 class AsyncFromFunctionOp(AsyncOp, FromFunctionOp):
-    """Async version of the ``aesara.compile.ops.FromFunctionOp``.
+    """Async version of the ``pytensor.compile.ops.FromFunctionOp``.
 
     Note that ``AsyncOp.perform`` overrides ``FromFunctionOp.perform`` by MRO.
     """
File renamed without changes.
File renamed without changes.
File renamed without changes.
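The renamed `op_async` module keeps its core idea: an `Op` whose `perform` drives a coroutine. Below is a minimal sketch of a custom async op, modeled on the `_AsyncDelay` helper in `test_op_async.py`; the exact `perform_async` signature is inferred from the truncated diff above and should be treated as an assumption.

```python
# Hedged sketch of subclassing AsyncOp; the perform_async signature is assumed.
import asyncio

import numpy as np
import pytensor.tensor as at
from pytensor.graph.basic import Apply, Variable

from pytensor_federated import op_async


class AsyncDelay(op_async.AsyncOp):
    """Sleeps for the requested number of seconds, then returns it."""

    def make_node(self, delay: Variable) -> Apply:
        # One scalar input, one scalar output.
        return Apply(op=self, inputs=[at.as_tensor_variable(delay)], outputs=[at.dscalar()])

    async def perform_async(self, node, inputs, output_storage, params=None):
        (delay,) = inputs
        await asyncio.sleep(float(delay))
        output_storage[0][0] = np.asarray(delay, dtype="float64")
```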

aesara_federated/test_npproto.py renamed to pytensor_federated/test_npproto.py (+2 -2)

@@ -3,8 +3,8 @@
 import numpy
 import pytest
 
-from aesara_federated import npproto
-from aesara_federated.npproto import utils
+from pytensor_federated import npproto
+from pytensor_federated.npproto import utils
 
 
 class TestUtils:
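The `npproto` helpers round-trip NumPy arrays through the protobuf wire format. A small hedged usage sketch, based on the conversion functions imported in `test_service.py` below; the round-trip behavior is assumed from their names.

```python
import numpy as np

from pytensor_federated.npproto.utils import ndarray_from_numpy, ndarray_to_numpy

arr = np.arange(6, dtype="float32").reshape(2, 3)
msg = ndarray_from_numpy(arr)  # protobuf message carrying the array buffer
np.testing.assert_array_equal(ndarray_to_numpy(msg), arr)
```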

aesara_federated/test_op_async.py renamed to pytensor_federated/test_op_async.py (+13 -13)

@@ -2,15 +2,15 @@
 import time
 from typing import Sequence, Tuple
 
-import aesara
-import aesara.tensor as at
 import numpy
+import pytensor
+import pytensor.tensor as at
 import pytest
-from aesara.graph import FunctionGraph
-from aesara.graph.basic import Apply, Variable
-from aesara.graph.op import Op
+from pytensor.graph import FunctionGraph
+from pytensor.graph.basic import Apply, Variable
+from pytensor.graph.op import Op
 
-from aesara_federated import op_async
+from pytensor_federated import op_async
 
 
 class _AsyncDelay(op_async.AsyncOp):

@@ -41,7 +41,7 @@ def test_perform(self):
         d = at.scalar()
         out = delay_op(d)
         # Compile a function to exclude compile time from delay measurement
-        f = aesara.function([d], [out])
+        f = pytensor.function([d], [out])
         ts = time.perf_counter()
         f(0.5)
         assert 0.5 < time.perf_counter() - ts < 0.6

@@ -60,12 +60,12 @@ async def _fn(x):
             otypes=[at.dscalar],
         )
         assert isinstance(affo, op_async.AsyncOp)
-        assert isinstance(affo, aesara.compile.ops.FromFunctionOp)
+        assert isinstance(affo, pytensor.compile.ops.FromFunctionOp)
 
         d = at.scalar()
         out = affo(d)
         # Compile a function to exclude compile time from delay measurement
-        f = aesara.function([d], [out])
+        f = pytensor.function([d], [out])
         ts = time.perf_counter()
         f(0.5)
         assert 0.5 < time.perf_counter() - ts < 0.6

@@ -97,7 +97,7 @@ def test_perform(self):
 
         # Evaluating the delays in parallel is faster than the sum of delays.
         # We do this with a compiled function to exclude compile time from delay measurement.
-        f = aesara.function([d1, d2], [dsum])
+        f = pytensor.function([d1, d2], [dsum])
         t_start = time.perf_counter()
         delay_sum = f(0.5, 0.2)[0]
         t_took = time.perf_counter() - t_start

@@ -152,10 +152,10 @@ def test_parallelize_async_applies():
 
 def _measure_fg(fg: FunctionGraph, *inputs) -> Tuple[Sequence[numpy.ndarray], float]:
     """Measure the runtime of a function compiled from `fg`."""
-    f = aesara.function(
+    f = pytensor.function(
         fg.inputs,
         fg.outputs,
-        mode=aesara.compile.FAST_COMPILE,  # skip the async fusion
+        mode=pytensor.compile.FAST_COMPILE,  # skip the async fusion
     )
     t0 = time.perf_counter()
     outputs = f(*inputs)

@@ -199,7 +199,7 @@ def test_fuse_asyncs_by_default():
     delay = _AsyncDelay()
     a, b = at.scalars("ab")
     c = delay(a) + delay(b)
-    f = aesara.function([a, b], [c])
+    f = pytensor.function([a, b], [c])
     t0 = time.perf_counter()
     f(0.25, 0.25)
     assert time.perf_counter() - t0 < 0.3

aesara_federated/test_service.py renamed to pytensor_federated/test_service.py (+4 -4)

@@ -8,10 +8,10 @@
 import numpy as np
 import pytest
 
-from aesara_federated import service, signatures
-from aesara_federated.npproto.utils import ndarray_from_numpy, ndarray_to_numpy
-from aesara_federated.rpc import GetLoadResult, InputArrays, OutputArrays
-from aesara_federated.utils import get_useful_event_loop
+from pytensor_federated import service, signatures
+from pytensor_federated.npproto.utils import ndarray_from_numpy, ndarray_to_numpy
+from pytensor_federated.rpc import GetLoadResult, InputArrays, OutputArrays
+from pytensor_federated.utils import get_useful_event_loop
 
 
 def test_compute_function():

aesara_federated/test_utils.py renamed to pytensor_federated/test_utils.py (+2 -2)

@@ -1,7 +1,7 @@
 import asyncio
 
-from aesara_federated import utils
-from aesara_federated.rpc import GetLoadResult
+from pytensor_federated import utils
+from pytensor_federated.rpc import GetLoadResult
 
 
 def test_argmin_load():

aesara_federated/test_wrapper_ops.py renamed to pytensor_federated/test_wrapper_ops.py (+7 -7)

@@ -4,19 +4,19 @@
 import sys
 import time
 
-import aesara
-import aesara.tensor as at
 import arviz
 import grpclib
 import numpy as np
 import pymc as pm
+import pytensor
+import pytensor.tensor as at
 import pytest
 import scipy
-from aesara.compile.ops import FromFunctionOp
-from aesara.graph.basic import Apply, Variable
+from pytensor.compile.ops import FromFunctionOp
+from pytensor.graph.basic import Apply, Variable
 
-from aesara_federated import common, op_async, service, wrapper_ops
-from aesara_federated.utils import get_useful_event_loop
+from pytensor_federated import common, op_async, service, wrapper_ops
+from pytensor_federated.utils import get_useful_event_loop
 
 
 class _MockLogpGradOpClient:

@@ -228,7 +228,7 @@ def test_grad(self):
         b = at.scalar()
         logp, *_ = flop(a, b)
         ga, gb = at.grad(logp, [a, b])
-        fn = aesara.function(inputs=[a, b], outputs=[logp, ga, gb])
+        fn = pytensor.function(inputs=[a, b], outputs=[logp, ga, gb])
         exlogp, (exda, exdb) = dummy_quadratic_model(1.4, 0.5)
         actual = fn(1.4, 0.5)
         np.testing.assert_array_equal(actual[0], exlogp)
File renamed without changes.

aesara_federated/wrapper_ops.py renamed to pytensor_federated/wrapper_ops.py (+7 -7)

@@ -1,18 +1,18 @@
 from typing import Any, Callable, List, Optional, Sequence, Union
 
-import aesara
-import aesara.tensor as at
 import numpy as np
-from aesara.compile.ops import FromFunctionOp
-from aesara.graph.basic import Apply, Variable
-from aesara.graph.op import Op, OutputStorageType, ParamsInputType
+import pytensor
+import pytensor.tensor as at
+from pytensor.compile.ops import FromFunctionOp
+from pytensor.graph.basic import Apply, Variable
+from pytensor.graph.op import Op, OutputStorageType, ParamsInputType
 
 from .op_async import AsyncFromFunctionOp, AsyncOp
 from .signatures import ComputeFunc, LogpFunc, LogpGradFunc
 
 
 class ArraysToArraysOp(FromFunctionOp):
-    """Alias for the `aesara.compile.ops.FromFunctionOp`.
+    """Alias for the `pytensor.compile.ops.FromFunctionOp`.
 
     This alias exists for more convenient imports,
     more informative type hints,

@@ -124,7 +124,7 @@ def grad(self, inputs: Sequence[Variable], output_grads: List[Variable]) -> List
         # one w.r.t. logp
         g_logp, *gs_inputs = output_grads
         for i, g in enumerate(gs_inputs):
-            if not isinstance(g.type, aesara.gradient.DisconnectedType):
+            if not isinstance(g.type, pytensor.gradient.DisconnectedType):
                 raise ValueError(f"Can't propagate gradients wrt parameter {i+1}")
         # Call again on the original inputs, to obtain a handle
         # on the gradient. The computation will not actually be
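Because `ArraysToArraysOp` is documented above as an alias for `FromFunctionOp`, wrapping a plain arrays-in/arrays-out function should mirror the `AsyncFromFunctionOp(fn=..., itypes=..., otypes=...)` construction seen in `test_op_async.py`. A hedged sketch; the re-export from the package root is an assumption.

```python
import numpy as np
import pytensor
import pytensor.tensor as at

# Assumed re-export; otherwise: from pytensor_federated.wrapper_ops import ArraysToArraysOp
from pytensor_federated import ArraysToArraysOp


def sum_and_diff(a, b):
    # Arrays in, arrays out: the ArraysToArrays contract.
    return [np.asarray(a + b), np.asarray(a - b)]


op = ArraysToArraysOp(
    fn=sum_and_diff,
    itypes=[at.dscalar, at.dscalar],
    otypes=[at.dscalar, at.dscalar],
)
a, b = at.scalars("ab")
s, d = op(a, b)
f = pytensor.function([a, b], [s, d])
print(f(3.0, 1.0))  # -> [array(4.), array(2.)]
```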

setup.py (+3 -3)

@@ -4,7 +4,7 @@
 
 import setuptools
 
-__packagename__ = "aesara_federated"
+__packagename__ = "pytensor_federated"
 ROOT = pathlib.Path(__file__).parent

@@ -38,8 +38,8 @@ def get_version():
     packages=setuptools.find_packages(),
     version=__version__,
     description="This package helps to reduce the amount of boilerplate code when creating Airflow DAGs from Python callables.",
-    url="https://github.com/michaelosthege/aesara-federated",
-    download_url=f"https://github.com/michaelosthege/aesara-federated/tarball/{__version__}",
+    url="https://github.com/michaelosthege/pytensor-federated",
+    download_url=f"https://github.com/michaelosthege/pytensor-federated/tarball/{__version__}",
     author="Michael Osthege",
     author_email="[email protected]",
     license="GNU Affero General Public License v3",
