
Docs: Add a recipe for robust runtime introspection #225


Merged: 2 commits merged into python:main from the introspection-recipes branch on Jun 8, 2023

Conversation

@AlexWaygood (Member)

Follow-up to #203

@srittau (Collaborator) left a comment:


get_typing_objects_by_name_of() could actually be useful for typing_extensions itself, even if we normally only add typing backports.

Also, did you profile that the cache actually has a positive performance benefit?

@AlexWaygood (Member, Author) commented Jun 8, 2023

> get_typing_objects_by_name_of() could actually be useful for typing_extensions itself

Well, I thought so too (see #203), but it was open for nearly two weeks and there didn't seem to be much interest in it from people who I imagined might be users of the function. @JelleZijlstra and @adriangb suggested we add a recipe to the docs instead :)

> Also, did you profile that the cache actually has a positive performance benefit?

I haven't actually, no. That's a good point; I'll do that. Though @JelleZijlstra mentioned in #203 (comment) that having a cache on a function very much like this had been crucial to keeping pyanalyze performant.

@srittau (Collaborator) commented Jun 8, 2023

If Jelle said that, that's good enough for me.

@AlexWaygood (Member, Author) commented Jun 8, 2023

I did some benchmarking anyway, using this script (I called it utils.py and put it in typing_extensions/src/):

Benchmark script
import functools
import typing
import typing_extensions
from typing import Tuple, Any
from typing_extensions import get_origin

# Use an unbounded cache for this function, for optimal performance
@functools.lru_cache(maxsize=None)
def get_typing_objects_by_name_of(name: str) -> Tuple[Any, ...]:
    # Collect every object with this name from both modules:
    # typing and typing_extensions may each provide their own copy.
    result = tuple(
        getattr(module, name)
        # You could potentially also include mypy_extensions here,
        # if your library supports mypy_extensions
        for module in (typing, typing_extensions)
        if hasattr(module, name)
    )
    if not result:
        raise ValueError(
            f"Neither typing nor typing_extensions has an object called {name!r}"
        )
    return result


def is_typing_name(obj: object, name: str) -> bool:
    # Identity check against every known copy of the object, so that
    # typing.Literal and typing_extensions.Literal both match.
    return any(obj is thing for thing in get_typing_objects_by_name_of(name))


is_literal = functools.partial(is_typing_name, name="Literal")


def _bench():
    # A mix of positive and negative cases.
    is_literal(typing.Literal)
    is_literal(typing_extensions.Literal)
    is_literal(typing.Any)
    is_literal(get_origin(typing.Literal[42]))
    is_literal(get_origin(typing_extensions.Final[42]))
I ran these commands from the typing_extensions/src/ directory. First, with the @functools.lru_cache(maxsize=None) decorator applied as shown above:

>python -m timeit -s "from utils import _bench" "_bench()"
50000 loops, best of 5: 4.51 usec per loop

With the @lru_cache(maxsize=None) line commented out:

>python -m timeit -s "from utils import _bench" "_bench()"
50000 loops, best of 5: 7.66 usec per loop

So, the cache does indeed seem to speed things up quite a lot (roughly 1.7× faster in this benchmark)!

@AlexWaygood (Member, Author) commented:

Adding the second cache speeds it up further, to 1.99 usec per loop. I think that's reasonable, though it makes sense to make that one a bounded cache rather than an unbounded one.
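For illustration, a minimal sketch of what that might look like, assuming the second cache is placed on is_typing_name from the benchmark script above; the thread doesn't pin down exactly where the second cache goes, and maxsize=128 is just an example bound, not a value taken from the PR:

import functools
import typing
import typing_extensions
from typing import Tuple, Any


@functools.lru_cache(maxsize=None)
def get_typing_objects_by_name_of(name: str) -> Tuple[Any, ...]:
    # Unbounded cache: only ever keyed by typing-construct names.
    result = tuple(
        getattr(module, name)
        for module in (typing, typing_extensions)
        if hasattr(module, name)
    )
    if not result:
        raise ValueError(
            f"Neither typing nor typing_extensions has an object called {name!r}"
        )
    return result


# Hypothetical second cache: bounded, because arbitrary objects get passed in
# and an unbounded cache could grow without limit in a long-running process.
# Note that lru_cache requires its arguments to be hashable, which the typing
# objects used here are.
@functools.lru_cache(maxsize=128)
def is_typing_name(obj: object, name: str) -> bool:
    return any(obj is thing for thing in get_typing_objects_by_name_of(name))


is_literal = functools.partial(is_typing_name, name="Literal")

With both caches in place, repeated checks for the same object reduce to dictionary lookups, which is consistent with the drop to 1.99 usec per loop reported above.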

@srittau srittau merged commit 4773f27 into python:main Jun 8, 2023
@AlexWaygood AlexWaygood deleted the introspection-recipes branch June 8, 2023 11:04