Implement support for result streaming [3.15] #1117
If you have a good grasp of the protocol, why not propose an extension to the values allowed by xref? We can add that in a future version, and maybe also release xref as a "core package". So far I'm not sure which shape it should take. An iterator? A promise? A chunked iterator?
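For illustration, the "chunked iterator" shape mentioned above could look roughly like this. This is a hypothetical sketch in Python, not an actual lsp-mode or xref API; all names are invented:

```python
from typing import Iterator, List

class ChunkedResults:
    """One possible 'chunked iterator' shape: chunks are pushed in as the
    server produces them, and the consumer pulls whatever has arrived so far."""

    def __init__(self) -> None:
        self._chunks: List[List[str]] = []
        self._done = False

    def push(self, chunk: List[str]) -> None:
        """Called by the transport layer as each partial result arrives."""
        self._chunks.append(chunk)

    def finish(self) -> None:
        """Marks the result set as complete (no more chunks coming)."""
        self._done = True

    @property
    def incomplete(self) -> bool:
        return not self._done

    def __iter__(self) -> Iterator[str]:
        # The consumer can iterate at any time; it sees everything
        # received so far and can check `incomplete` to decide whether
        # to come back for more.
        for chunk in self._chunks:
            yield from chunk

results = ChunkedResults()
results.push(["ref1.el:10", "ref2.el:42"])  # first chunk from the server
```

A promise-based shape would instead resolve once per chunk; the pull interface above keeps the consumer in control of when it reads.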
Does anybody know a server implementing that part of the spec? @alanz @robertoaloi @ericdallo @sebastiansturm @kiennq ?
There's currently no support in the Dart LSP server.
Hi @yyoncho and sorry for the late reply. Erlang LS does not yet support streaming of results. It only has support for progress reports at this stage.
Thank you, I will ask in language server protocol repo. Hopefully, it has been implemented somewhere... |
@yyoncho I'm planning to implement the partial results feature in the language server for Kotlin I develop. For now I'm working on code completion, but other features would support streaming as well.
@suive how is your language server different from https://github.com/fwcd/kotlin-language-server? In any case, I can offer you my thoughts on implementing this on
Note that these are only suggestions, and there might be a better design I haven't thought of.
@suive thank you. For completion, we should coordinate with @dgutov. AFAICT most of the work/challenges will be on the company-mode side. It seems to me that we won't be able to fit streaming into capf without going to emacs-devel and waiting for a new Emacs release. This means that the practical solution will be to implement it as a traditional company backend and then move it into capf when appropriate. But it all depends on @dgutov's view. cc @kiennq @nbfalcon

sounds good. We might create integration tests for streaming results.
I really wish people visited emacs-devel more; not sure what else could restart the process of getting asynchronous completion support into capf. And streaming too, all ending up in a new-and-rewritten version of capf. But starting on it in company should be fine as well: we can consider it incubating new API features, and experiment with how the change should look in the UI.

BTW, is streaming something that gives a measurable improvement to the user experience? Reduced latency until the first popup display, less memory consumption on the backend, all of that? I just figured that if you need to do both fuzzy matching and the appropriate sorting of matches, you need to do the full search on the backend anyway, and then you might only be able to save on serialization and bandwidth.
I think at least it will reduce the latency until the first popup.
Also, how
It can, but I think the question deserves some benchmarking. And also, maybe a comparison against YouCompleteMe's original approach, which was simply to limit the number of returned completions to 1000 (they did not cache on the client, obviously).
If you can [do the sorting properly on the client]. Of course, the user shouldn't notice that happening until they type some new char, or maybe press
I think it should be a "pull" interface (like, there is a container, it reports that the current set of items is incomplete, and you can call it in a special way to get more items). Alternatively, it can be an extension of the current "async" interface, where you would be able to pass the newly arriving items to the callback multiple times (the callback's signature will change from
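The extended-async variant discussed above could be sketched like so. This is hypothetical: the illustrated signature (a list of new items plus a done flag) is just one guess at what a multi-call callback might look like, not an actual company-mode or capf API:

```python
from typing import Callable, List, Sequence

def stream_completions(chunks: Sequence[List[str]],
                       callback: Callable[[List[str], bool], None]) -> None:
    """Invokes the consumer's callback once per arriving chunk; the second
    argument signals whether this was the final batch (invented shape)."""
    for i, chunk in enumerate(chunks):
        callback(chunk, i == len(chunks) - 1)

# The frontend accumulates candidates across multiple callback invocations
# instead of receiving them all at once.
collected: List[str] = []
stream_completions([["foo", "foobar"], ["fizz"]],
                   lambda items, done: collected.extend(items))
```

The pull interface keeps the UI in control of refresh timing; the push variant keeps the existing async calling convention at the cost of callbacks firing while a popup is already displayed.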
I don't see why we would need result streaming for completion at all. For things like
Just FYI, there was a time when the Dart LSP server was returning thousands of completion items, as you can see in this table; there is info on other languages too. Wouldn't this streaming help with that?
We'd just get thousands of results across multiple JSON responses. Would that really be helpful? We'd be spending the same time parsing (perhaps even more).
Well, this unfortunately is not the case for Kotlin, where a more comprehensive analysis is needed to get extension properties (those can be scattered anywhere in the project and its dependencies). We can let the user get fast-to-compute local completion results, and then asynchronously send the other (more expensive to compute) part. That's how they do it in IntelliJ IDEA, I believe.
@suive How do you deal with fuzzy sorting them, then? |
@dgutov In my opinion it would be a good idea to lift this restriction, if there aren't many technical difficulties:
In my experience there is nothing wrong with a completion hover window updating without user interaction. I think the ability to get (partial) results faster is more important in this case. In our case we either rely on client-side sorting (re-evaluating it after each chunk), or treat each update as a full replacement, thereby moving sorting to the backend (bandwidth is not even an issue, in my opinion). But the second solution is not the way it is intended in the protocol, I believe.
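The two merge strategies described above can be made concrete with a small sketch (assumed Python pseudocode, not actual client code; the function names are invented):

```python
from typing import List

def merge_resort(current: List[str], chunk: List[str]) -> List[str]:
    """Strategy (a): append the new chunk and re-run client-side sorting
    after every update."""
    return sorted(current + chunk, key=str.lower)

def merge_replace(_current: List[str], chunk: List[str]) -> List[str]:
    """Strategy (b): each update replaces the whole candidate list,
    trusting the server's ordering."""
    return list(chunk)

# Strategy (a): candidates stay globally sorted as chunks arrive.
cands: List[str] = []
for chunk in (["zebra", "apple"], ["mango"]):
    cands = merge_resort(cands, chunk)
```

Strategy (a) costs a re-sort per chunk but keeps the popup stable; strategy (b) is cheaper on the client but, as noted, is not how the protocol intends partial results to be combined.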
Hmm, I suppose I should see it first. VS Code does that as well, I imagine? Whether we can do this in company also depends on the rendering methods available to us (overlays and child-frame based popups). Like, if doing that will introduce extra glitches/blinking, then I'd wonder whether it is worth it.
Posframe solutions won't blink, right? |
We should test that with some prototype. One reason I'm personally not using a posframe-based frontend yet is that they're generally slower (on a GTK3 Emacs build) and show some artefacts from time to time.
The upcoming 3.15 version of the protocol (https://microsoft.github.io/language-server-protocol/specifications/specification-3-15/) will support streaming results (e.g. find-references, completion, find-definition, etc.). We will have to find a way to represent that data in lsp-mode.
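Under 3.15, partial results flow roughly like this: the client attaches a partialResultToken to its request, the server streams chunks back as $/progress notifications carrying that token, and the final response is then left empty since all data arrived via the notifications. A minimal sketch of the message shapes (the URI, position, and token values are invented):

```python
# Sketch of the LSP 3.15 partial-result message flow, per the spec linked
# above. These are plain dicts mirroring the JSON-RPC payloads.
token = "stream-1"  # invented ProgressToken value

request = {
    "jsonrpc": "2.0", "id": 1,
    "method": "textDocument/references",
    "params": {
        "partialResultToken": token,  # opts in to streamed chunks
        "textDocument": {"uri": "file:///a.kt"},
        "position": {"line": 3, "character": 7},
        "context": {"includeDeclaration": True},
    },
}

# Each chunk of references arrives as a $/progress notification whose
# token ties it back to the originating request.
progress = {
    "jsonrpc": "2.0",
    "method": "$/progress",
    "params": {
        "token": token,
        "value": [{"uri": "file:///a.kt",
                   "range": {"start": {"line": 3, "character": 7},
                             "end": {"line": 3, "character": 10}}}],
    },
}
```

The client correlates chunks with the pending request via the token, which is what a streaming-aware lsp-mode frontend (helm/ivy or xref) would consume incrementally.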
A possible solution would be to implement streaming support on top of helm/ivy and enable it only when the user uses them. When they are not present, fall back to standard xref and do not declare streaming support.
cc @dgutov as the maintainer of company and xref (?).