Make parsing even faster, and use less memory #108
Merged
Further refinement to #107.
The `cpu+mem` profiler in Emacs still showed a global-scope completion with javascript-typescript-langserver chewing through 119MB, with GC taking up 28% of the runtime. Profiling still pointed towards `lsp--parser-read`.

I suspected constantly `concat`ing chunks was making a ton of extra allocations; e.g. we'd get a 4k chunk initially, then alloc a new 8k string (to hold the leftover 4k plus the new chunk), then 12k, 16k, ... compounding until we reached the full megabyte response (roughly 128MB of cumulative copying for a 1MB body arriving in 4k chunks). This change preallocates a single big string of the total Content-Length size, then fills it as we go. This cuts memory usage to around 29MB and GC CPU time to ~23%.
Note: trusting the server to not set a malicious Content-Length...
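For illustration, a rough sketch of the preallocate-and-fill approach (the names here are made up for this example, not the actual lsp-mode internals):

```elisp
(defun my-parser-alloc-body (content-length)
  "Allocate one string big enough for the whole message body.
CONTENT-LENGTH is the byte count announced in the header."
  (make-string content-length ?\0))

(defun my-parser-add-chunk (body offset chunk)
  "Copy CHUNK into BODY at OFFSET and return the offset after it.
`store-substring' writes in place, so no intermediate strings are
allocated as chunks arrive; allocation stays linear in the body size."
  (store-substring body offset chunk)
  (+ offset (length chunk)))
```

Once the offset reaches the Content-Length, the body is complete and can be handed to the JSON parser in one piece.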
As an aside, `json-read-from-string` is now the single biggest CPU bottleneck, followed by GC and then various internal Emacs completion stuff. So I guess we're good for now?

Numbers are from running a test script with `emacs -q --script`. Profiles are in the zip (open with `profiler-find-profile`): profiles.zip
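For the curious, the capture part of such a script looks roughly like this (the workload in the middle is a stand-in):

```elisp
(require 'profiler)

(profiler-start 'cpu+mem)
;; ... exercise the parser here, e.g. request a global-scope completion ...
;; Grab the profiles while the profiler is still running, then save
;; them to files that `profiler-find-profile' can open later.
(profiler-write-profile (profiler-cpu-profile) "cpu.profile")
(profiler-write-profile (profiler-memory-profile) "mem.profile")
(profiler-stop)
```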