Add a small LRU cache in front of calls to normalize
We are almost always normalizing the same strings over and over again. Consider iterating over mappings: we reconstruct each mapping's source, but because the mappings are in sorted order, consecutive mappings will very likely have the same source every time. A small cache in front of `normalize` lets us skip that repeated work; a sketch of the idea follows.
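
Here is a minimal sketch of the idea, not the actual implementation in this repo: a tiny LRU cache keyed on the input string, wrapped around an existing `normalize` function. The `lruMemoize` name and the capacity of 32 are assumptions for illustration only.

```typescript
type Normalize = (path: string) => string;

// Wrap a normalize-style function with a small LRU cache.
function lruMemoize(fn: Normalize, capacity = 32): Normalize {
  const cache = new Map<string, string>();
  return (input: string): string => {
    const hit = cache.get(input);
    if (hit !== undefined) {
      // Refresh recency: Map preserves insertion order, so deleting and
      // re-inserting moves this key to the "most recently used" end.
      cache.delete(input);
      cache.set(input, hit);
      return hit;
    }
    const result = fn(input);
    cache.set(input, result);
    if (cache.size > capacity) {
      // Evict the least recently used entry (the first key in insertion order).
      cache.delete(cache.keys().next().value!);
    }
    return result;
  };
}

// Usage sketch: wrap the existing normalize once and call the wrapper everywhere.
// const normalize = lruMemoize(util.normalize);
```

Because the mappings arrive in sorted order, almost every lookup after the first is a cache hit, so even a very small capacity is enough.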
With the cache, the `iterate.already.parsed` benchmark takes about 0.09x the time it used to!
Without the LRU cache:

| Samples | Total (ms) | Mean (ms) | Standard Deviation (ms) |
| ------- | ---------- | --------- | ----------------------- |
| 50      | 257604.64  | 5152.09   | 221.19                  |

With the new LRU cache:

| Samples | Total (ms) | Mean (ms) | Standard Deviation (ms) |
| ------- | ---------- | --------- | ----------------------- |
| 50      | 23301.74   | 466.03    | 56.14                   |