Modify np.tri Op to use _iota instead #1276
base: main
Conversation
The circular import is because you are importing … The solution is to move the …
Those Ops and a few more (arange, alloc, ...) should probably be in a …
@jessegrabowski My bad for missing it, I'll make the change.
That's up to you. It would be much appreciated!
@jessegrabowski Made the change. I have left the docstring of …
@jessegrabowski The tests were passing for …
@jessegrabowski I made the change, the tests are passing now. I have removed 'complex64' …
Well what was the error? I don't want us removing tests. |
@jessegrabowski …
Put the test back and I'll trigger a CI run so I can see the full output.
@jessegrabowski Also stuck at this test case; it seems to be failing because we are passing concrete values instead of symbolic ones (I think). The same thing is tested in the test case below it.
@@ -1142,10 +1176,19 @@ def tri(N, M=None, k=0, dtype=None):
    """
    if dtype is None:
        dtype = config.floatX
    dtype = np.dtype(dtype)

    if M is None:
        M = N
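For context, a minimal NumPy sketch of the iota-style construction the PR title refers to (illustrative only; the PR's actual implementation uses PyTensor's internal _iota helper, which is not shown here). np.tri's mask is just a comparison of broadcast row and column index grids:

import numpy as np

def tri_sketch(N, M=None, k=0, dtype="float64"):
    # Hypothetical helper, not the PR's code.
    if M is None:
        M = N
    rows = np.arange(N).reshape(N, 1)   # row indices, broadcast over columns
    cols = np.arange(M).reshape(1, M)   # column indices, broadcast over rows
    # Element (i, j) of np.tri is 1 exactly when j <= i + k.
    return (cols <= rows + k).astype(dtype)

assert np.array_equal(tri_sketch(4, 5, k=1), np.tri(4, 5, k=1))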
Suggested change:
-        M = N
+        M = N.copy()
OpFromGraph can freak out sometimes if a single input is re-used.
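A hedged sketch of that concern (names are illustrative, not the PR's code): making the "M defaults to N" case a symbolic copy keeps the same Variable from being fed into the graph twice.

import pytensor.tensor as pt

N = pt.scalar("N", dtype="int64")
M = N.copy()  # distinct Variable standing in for the column count
# k = 0 case of the tri mask: 1 where the column index <= the row index.
mask = pt.ge(pt.arange(N)[:, None], pt.arange(M)[None, :])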
@jessegrabowski Makes sense, I'll make the change. But it still resulted in the same error.
Stack trace (formatting may be unclear)
tests\link\jax\test_tensor_basic.py F [100%]
====================================================== FAILURES =======================================================
______________________________________________________ test_tri _______________________________________________________
def streamline_default_f():
    for x in no_recycling:
        x[0] = None
    try:
        # strict=False because we are in a hot loop
        for thunk, node, old_storage in zip(
            thunks, order, post_thunk_old_storage, strict=False
        ):
            thunk()
pytensor\link\utils.py:197:
pytensor\graph\op.py:531: in rval
r = p(n, [x[0] for x in i], o)
pytensor\compile\builders.py:875: in perform
variables = self.fn(*inputs)
pytensor\compile\builders.py:856: in fn
self.fn = function(self.inner_inputs, self.inner_outputs, **self.kwargs)
pytensor\compile\function\__init__.py:332: in function
fn = pfunc(
pytensor\compile\function\pfunc.py:472: in pfunc
return orig_function(
pytensor\compile\function\types.py:1820: in orig_function
m = Maker(
pytensor\compile\function\types.py:1567: in __init__
self.check_unused_inputs(inputs, outputs, on_unused_input)
inputs = [In(*0-<Scalar(int8, shape=())>), In(*1-<Scalar(int8, shape=())>), In(*2-<Scalar(int8, shape=())>)]
outputs = [Out(Cast{float64}.0,False)], on_unused_input = 'raise'
@staticmethod
def check_unused_inputs(inputs, outputs, on_unused_input):
if on_unused_input is None:
on_unused_input = config.on_unused_input
if on_unused_input == "ignore":
return
# There should be two categories of variables in inputs:
# - variables that have to be provided (used_inputs)
# - shared variables that will be updated
used_inputs = list(
ancestors(
(
[o.variable for o in outputs]
+ [
i.update
for i in inputs
if getattr(i, "update", None) is not None
]
),
blockers=[i.variable for i in inputs],
)
)
msg = (
"pytensor.function was asked to create a function computing "
"outputs given certain inputs, but the provided input "
"variable at index %i is not part of the computational graph "
"needed to compute the outputs: %s.\n%s"
)
warn_msg = (
"To make this warning into an error, you can pass the "
"parameter on_unused_input='raise' to pytensor.function. "
"To disable it completely, use on_unused_input='ignore'."
)
err_msg = (
"To make this error into a warning, you can pass the "
"parameter on_unused_input='warn' to pytensor.function. "
"To disable it completely, use on_unused_input='ignore'."
)
for i in inputs:
if (i.variable not in used_inputs) and (i.update is None):
if on_unused_input == "warn":
warnings.warn(
msg % (inputs.index(i), i.variable, warn_msg), stacklevel=6
)
elif on_unused_input == "raise":
raise UnusedInputError(msg % (inputs.index(i), i.variable, err_msg))
E pytensor.compile.function.types.UnusedInputError: pytensor.function was asked to create a function computing outputs given certain inputs, but the provided input variable at index 0 is not part of the computational graph needed to compute the outputs: *0-<Scalar(int8, shape=())>.
E To make this error into a warning, you can pass the parameter on_unused_input='warn' to pytensor.function. To disable it completely, use on_unused_input='ignore'.
pytensor\compile\function\types.py:1438: UnusedInputError
During handling of the above exception, another exception occurred:
def test_tri():
    out = ptb.tri(10, 10, 0)
    compare_jax_and_py([], [out], [])
tests\link\jax\test_tensor_basic.py:207:
tests\link\jax\test_basic.py:87: in compare_jax_and_py
py_res = pytensor_py_fn(*test_inputs)
pytensor\compile\function\types.py:1037: in __call__
outputs = vm() if output_subset is None else vm(output_subset=output_subset)
pytensor\link\utils.py:201: in streamline_default_f
raise_with_op(fgraph, node, thunk)
pytensor\link\utils.py:526: in raise_with_op
raise exc_value.with_traceback(exc_trace)
pytensor\link\utils.py:197: in streamline_default_f
thunk()
pytensor\graph\op.py:531: in rval
r = p(n, [x[0] for x in i], o)
pytensor\compile\builders.py:875: in perform
variables = self.fn(*inputs)
pytensor\compile\builders.py:856: in fn
self.fn = function(self.inner_inputs, self.inner_outputs, **self.kwargs)
pytensor\compile\function\__init__.py:332: in function
fn = pfunc(
pytensor\compile\function\pfunc.py:472: in pfunc
return orig_function(
pytensor\compile\function\types.py:1820: in orig_function
m = Maker(
pytensor\compile\function\types.py:1567: in __init__
self.check_unused_inputs(inputs, outputs, on_unused_input)
inputs = [In(*0-<Scalar(int8, shape=())>), In(*1-<Scalar(int8, shape=())>), In(*2-<Scalar(int8, shape=())>)]
outputs = [Out(Cast{float64}.0,False)], on_unused_input = 'raise'
@staticmethod
def check_unused_inputs(inputs, outputs, on_unused_input):
if on_unused_input is None:
on_unused_input = config.on_unused_input
if on_unused_input == "ignore":
return
# There should be two categories of variables in inputs:
# - variables that have to be provided (used_inputs)
# - shared variables that will be updated
used_inputs = list(
ancestors(
(
[o.variable for o in outputs]
+ [
i.update
for i in inputs
if getattr(i, "update", None) is not None
]
),
blockers=[i.variable for i in inputs],
)
)
msg = (
"pytensor.function was asked to create a function computing "
"outputs given certain inputs, but the provided input "
"variable at index %i is not part of the computational graph "
"needed to compute the outputs: %s.\n%s"
)
warn_msg = (
"To make this warning into an error, you can pass the "
"parameter on_unused_input='raise' to pytensor.function. "
"To disable it completely, use on_unused_input='ignore'."
)
err_msg = (
"To make this error into a warning, you can pass the "
"parameter on_unused_input='warn' to pytensor.function. "
"To disable it completely, use on_unused_input='ignore'."
)
for i in inputs:
if (i.variable not in used_inputs) and (i.update is None):
if on_unused_input == "warn":
warnings.warn(
msg % (inputs.index(i), i.variable, warn_msg), stacklevel=6
)
elif on_unused_input == "raise":
raise UnusedInputError(msg % (inputs.index(i), i.variable, err_msg))
E pytensor.compile.function.types.UnusedInputError: pytensor.function was asked to create a function computing outputs given certain inputs, but the provided input variable at index 0 is not part of the computational graph needed to compute the outputs: *0-<Scalar(int8, shape=())>.
E To make this error into a warning, you can pass the parameter on_unused_input='warn' to pytensor.function. To disable it completely, use on_unused_input='ignore'.
E Apply node that caused the error: Tri{inline=False}(10, 10, 0)
E Toposort index: 0
E Inputs types: [TensorType(int8, shape=()), TensorType(int8, shape=()), TensorType(int8, shape=())]
E Inputs shapes: [(), (), ()]
E Inputs strides: [(), (), ()]
E Inputs values: [array(10, dtype=int8), array(10, dtype=int8), array(0, dtype=int8)]
E Outputs clients: [[output0]]
E
E Backtrace when the node is created (use PyTensor flag traceback__limit=N to make it longer):
E File "C:\Users\Public\miniforge3\envs\pytensor-dev\Lib\site-packages\pluggy_callers.py", line 103, in _multicall
E res = hook_impl.function(*args)
E File "C:\Users\Public\miniforge3\envs\pytensor-dev\Lib\site-packages_pytest\runner.py", line 174, in pytest_runtest_call
E item.runtest()
E File "C:\Users\Public\miniforge3\envs\pytensor-dev\Lib\site-packages_pytest\python.py", line 1627, in runtest
E self.ihook.pytest_pyfunc_call(pyfuncitem=self)
E File "C:\Users\Public\miniforge3\envs\pytensor-dev\Lib\site-packages\pluggy_hooks.py", line 513, in call
E return self._hookexec(self.name, self._hookimpls.copy(), kwargs, firstresult)
E File "C:\Users\Public\miniforge3\envs\pytensor-dev\Lib\site-packages\pluggy_manager.py", line 120, in _hookexec
E return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
E File "C:\Users\Public\miniforge3\envs\pytensor-dev\Lib\site-packages\pluggy_callers.py", line 103, in _multicall
E res = hook_impl.function(*args)
E File "C:\Users\Public\miniforge3\envs\pytensor-dev\Lib\site-packages_pytest\python.py", line 159, in pytest_pyfunc_call
E result = testfunction(**testargs)
E File "C:\Users\Nimish Purohit\pytensor\tests\link\jax\test_tensor_basic.py", line 205, in test_tri
E out = ptb.tri(10, 10, 0)
E
E HINT: Use the PyTensor flag exception_verbosity=high for a debug print-out and storage map footprint of this Apply node.
pytensor\compile\function\types.py:1438: UnusedInputError
================================================ slowest 50 durations =================================================
0.22s call tests/link/jax/test_tensor_basic.py::test_tri
(2 durations < 0.005s hidden. Use -vv to show these durations.)
=============================================== short test summary info ===============================================
FAILED tests/link/jax/test_tensor_basic.py::test_tri - pytensor.compile.function.types.UnusedInputError: pytensor.function was asked to create a function computing output...
================================================== 1 failed in 5.51s ==================================================
I'll have a look more closely at what is going on over the weekend. It's not obvious to me at first glance.
@jessegrabowski Any idea on how to proceed? The test function call results in this call, which is causing the UnusedInputError. The call contains an empty graph_inputs, compared to the normal Python test cases that are passing.
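For illustration, a hedged sketch of the contrast being described (the compare_jax_and_py call signature is inferred from the trace above, so treat the commented call as an assumption rather than a verified fix):

import pytensor.tensor as pt

# Constant case, as in the failing test: the graph has no symbolic inputs,
# so the wrapped Op only ever receives baked-in constants.
out_const = pt.tri(10, 10, 0)

# Symbolic case, closer to the tests that pass: N, M and k stay genuine
# inputs of the computational graph.
N = pt.scalar("N", dtype="int64")
M = pt.scalar("M", dtype="int64")
k = pt.scalar("k", dtype="int64")
out_sym = pt.tri(N, M, k)
# compare_jax_and_py([N, M, k], [out_sym], [10, 10, 0])  # hypothetical call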
Description
Related Issue
Tri op with an OpFromGraph #1265
Checklist
Type of change
📚 Documentation preview 📚: https://pytensor--1276.org.readthedocs.build/en/1276/