Implement support for result streaming [3.15] #1117

Open

yyoncho opened this issue Oct 19, 2019 · 20 comments

@yyoncho
Member

yyoncho commented Oct 19, 2019

The upcoming https://microsoft.github.io/language-server-protocol/specifications/specification-3-15/ version of the protocol will support streaming results (e.g. find-references, completion, find definitions, etc.). We will have to find a way to represent that data in lsp-mode.
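
For reference, a rough sketch of how the 3.15 partial-result handshake looks on the wire, written here as Emacs Lisp plists for readability (the token and file values are made up):

    ;; 1. The client attaches a partialResultToken to a request that
    ;;    supports partial results (illustrative values throughout):
    (list :textDocument (list :uri "file:///tmp/foo.el")
          :position (list :line 3 :character 5)
          :partialResultToken "pr-1")

    ;; 2. The server then streams chunks as $/progress notifications
    ;;    referencing that token, e.g.
    ;;      (:token "pr-1" :value [<location> <location> ...])
    ;;    one notification per chunk; the final response to the
    ;;    original request then carries no additional results.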

A possible solution would be to implement streaming support on top of helm/ivy and enable it only when the user uses them. When they are not present, fall back to standard xref and do not declare streaming support.

cc @dgutov as the maintainer of company and xref(?).

@dgutov
Contributor

dgutov commented Oct 19, 2019

If you have a good grasp of the protocol, why not propose an extension to the values allowed by xref? And we can add that in a future version. Maybe also release xref as a "core package".

So far I'm not sure which shape it should take. An iterator? A promise? A chunked iterator?

@yyoncho
Member Author

yyoncho commented May 24, 2020

Does anybody know a server implementing that part of the spec? @alanz @robertoaloi @ericdallo @sebastiansturm @kiennq ?

@ericdallo
Member

There's currently no support for that in the Dart LSP server.

@robertoaloi
Contributor

Hi @yyoncho and sorry for the late reply. Erlang LS does not yet support streaming of results. It only has support for progress reports at this stage.

@yyoncho
Member Author

yyoncho commented Jun 5, 2020

Thank you, I will ask in the language server protocol repo. Hopefully, it has been implemented somewhere...

@shoobinger

shoobinger commented Jan 12, 2021

@yyoncho I'm planning to implement the partial results feature in the language server for Kotlin I develop. For now I'm working on code completion, but other features would support streaming as well.
The language server is in an early WIP state now, but it's usable, and I guess it can be used to develop and test the feature client-side as soon as it is ready on my part.
I will also be glad to help with the client-side changes.

@nbfalcon
Member

nbfalcon commented Jan 12, 2021

@suive how is your language server different from https://github.com/fwcd/kotlin-language-server? In any case, I can offer my thoughts on implementing this on lsp-mode's side:

  • lsp-request-async gets a new parameter, :partial-result-mode. It can be either t, in which case partial results are passed to the callback through multiple invocations; 'collect, in which case the callback is invoked multiple times but the results are concatenated each time (useful for ivy, because there is only ivy-update-candidates, which takes a list of candidates as the new set); or nil (the default), in which case partial results aren't supported. With the 'collect mode, only a few changes would be necessary (see the sketch after this list).
  • lsp-request: streaming makes no sense for synchronous requests.
  • lsp-request, hybrid mode (a different function): there could be another function like lsp-request which awaits only the first batch of results and then calls a callback from then on. A use case is waiting for some ivy candidates so as not to show a window prematurely and to allow C-g, and then updating the candidates window from then on.

Note that these are only suggestions, and there might be a better design I haven't thought of.
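
To make the first suggestion concrete, here is a minimal sketch of the 'collect variant, assuming the proposed :partial-result-mode keyword existed on lsp-request-async (it does not today); my-format-symbol is a hypothetical helper that turns a SymbolInformation into an ivy candidate string:

    ;; Hypothetical usage: :partial-result-mode is the proposed
    ;; keyword, not part of the current lsp-request-async signature.
    (lsp-request-async
     "workspace/symbol" (list :query "render")
     (lambda (symbols)
       ;; In 'collect mode the callback receives the concatenation of
       ;; all chunks seen so far, so the whole list can be handed to
       ;; ivy-update-candidates on every invocation.
       (ivy-update-candidates (mapcar #'my-format-symbol symbols)))
     :partial-result-mode 'collect)

The hybrid variant would look much the same, except that the enclosing call would block until the first chunk arrives.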

@yyoncho
Member Author

yyoncho commented Jan 12, 2021

@suive thank you.

For completion, we should coordinate with @dgutov. AFAICT most of the work/challenges will be on the company-mode side. It seems to me that we won't be able to fit streaming into capf without going to emacs-devel and waiting for a new Emacs release. This means that the practical solution will be to implement it as a traditional company backend and then push it into capf when appropriate. But it all depends on @dgutov's view.

cc @kiennq

@nbfalcon sounds good. We might create integration tests for streaming results.

@dgutov
Contributor

dgutov commented Jan 14, 2021

I really wish people visited emacs-devel more; not sure what else could restart the process of getting asynchronous completion support into capf. And streaming, too. All ending up in a new-and-rewritten version of capf.

But starting on it in company should be fine too. We can consider it incubating new API features, as well as experiment with how the change should look in the UI.

BTW, is streaming something that gives a measurable improvement in the user experience? Reduced latency until the first popup display, less memory consumption on the backend, all of that?

I just figured that if you need to do both fuzzy matching and the appropriate sorting of matches, you need to do the full search for them on the backend anyway, and then you might only be able to save on serialization and bandwidth.

@kiennq
Member

kiennq commented Jan 14, 2021

I think at least it will reduce the latency until the first popup.
The part about fuzzy matching and sorting is interesting. Mainly, how should we approach it:

  • Should new items be fuzzy matched and pushed to the end?
    • Pros: no items jumping around in the popup and less user confusion.
    • Cons: the order is not correct and depends on how the language server sends chunks.
  • Should new items be fuzzy matched and sorted into the correct place? (a sketch follows after this list)
    • Pros: correct order.
    • Cons: items can jump around, which may cause confusion. We can keep the selection fixed on the selected item, though.
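
A minimal sketch of the second option, assuming a fuzzy scorer my-score (hypothetical; returns nil for non-matches, higher numbers for better matches): each arriving chunk is filtered against the prefix and merged so that the visible candidate list stays sorted.

    (require 'seq)

    ;; `my-score' is a stand-in for whatever fuzzy matcher is used;
    ;; it is not an existing company or lsp-mode function.
    (defun my-merge-chunk (candidates chunk prefix)
      "Merge the new CHUNK into the sorted CANDIDATES for PREFIX."
      (sort (append candidates
                    ;; Keep only the chunk items that match PREFIX.
                    (seq-filter (lambda (c) (my-score c prefix)) chunk))
            ;; Re-sort so every item lands in its correct place.
            (lambda (a b) (> (my-score a prefix) (my-score b prefix)))))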

Also, how is company-mode going to deal with this streaming result? Should company-mode provide a special popup-updating notification that capf should call to let company-mode know there are new items available?

@dgutov
Contributor

dgutov commented Jan 14, 2021

I think at least it will reduce the latency until the first popup.

It can, but I think the question deserves some benchmarking. And also, maybe a comparison against YouCompleteMe's original approach, which was simply to limit the number of returned completions to 1000 (they did not cache on the client, obviously).

Should new items be fuzzy matched and sorted into the correct place?

If you can [do the sorting properly on the client]. Of course, the user shouldn't notice that happening until they type some new char, or maybe press <down> or M-n (this part will require explicit new support in company). The popup shouldn't blink or shuffle items around while the user is not interacting with it.

Should company-mode provide a special popup-updating notification that capf should call to let company-mode know there are new items available?

I think it should be a "pull" interface (like, there is a container, it reports that the current set of items is incomplete, and you can call it in a special way to get more items). Alternatively, it can be an extension of the current "async" interface where you will be able to pass the newly arriving items to the callback multiple times (the callback's signature will change from (lambda (result)) to (lambda (result &optional more-to-come))). The former might be closer to what a capf author would like, but the latter could be a little easier to implement (it will probably get translated to some kind of "pull" interface internally, though).
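
As an illustration of the second variant, here is a minimal company backend sketch; the more-to-come callback argument and my-fetch-chunks (a stand-in for a client that delivers server results chunk by chunk) are assumptions, not part of company's current API:

    (require 'company)

    (defun my-streaming-backend (command &optional arg &rest _)
      "A sketch of a company backend using the extended async callback."
      (pcase command
        ('prefix (company-grab-symbol))
        ('candidates
         (cons :async
               (lambda (callback)
                 ;; `my-fetch-chunks' is hypothetical: it calls its
                 ;; function argument once per chunk of results.
                 (my-fetch-chunks
                  arg
                  (lambda (chunk more-to-come)
                    ;; Invoking the callback repeatedly, with a
                    ;; non-nil MORE-TO-COME flag while the set is
                    ;; still incomplete, is the proposed extension
                    ;; to the async convention.
                    (funcall callback chunk more-to-come))))))))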

@nbfalcon
Member

I don't see why we would need result streaming for completion at all. For things like lsp-ivy-workspace-symbol it makes sense, because there could be thousands of candidates, and some symbols might be in indexes not yet loaded from disk. But auto-completion is only based on parsing most of the time, so all information is available at once to the LS, not lazily loaded. Disclaimer: I am not an LS developer, so this might be nonsense.

@ericdallo
Member

Just FYI, there was a time when the Dart LSP server returned thousands of completion items, as you can see in this table (it has info for other languages too). Wouldn't this streaming help with that?

@nbfalcon
Member

We'd just get thousands of results across multiple JSON responses. Would that really be helpful? We'd be spending the same time parsing (perhaps even more).

@shoobinger

completion is only based on parsing most of the time

Well, this unfortunately is not the case for Kotlin, where a more comprehensive analysis is needed to get extension properties (those can be scattered anywhere in the project and its dependencies). We can let the user get fast-to-compute local completion results first, and then asynchronously send the other (more expensive to compute) part. That's how they do it in IntelliJ IDEA, I believe.

@dgutov
Contributor

dgutov commented Jan 14, 2021

@suive How do you deal with fuzzy sorting them, then?

@shoobinger

shoobinger commented Jan 14, 2021

@dgutov In my opinion it would be a good idea to lift this restriction, if there aren't many technical difficulties:

Of course, the user shouldn't notice that happening until they type some new char, or maybe press <down> or M-n (this part will require explicit new support in company). The popup shouldn't blink or shuffle items around while the user is not interacting with it.

In my experience there is nothing wrong with a completion hover window updating without user interaction. I think the ability to get (partial) results faster is more important in this case.
Continuing the comparison with IDEA: they manage to update the window in the background, so you see new items appearing in the window, not necessarily at the end of the list. The selection is kept on the item selected by the user.

In our case we can either rely on client-side sorting (re-evaluating it after each chunk), or treat each update as a full replacement, thereby moving the sorting to the backend (bandwidth is not even an issue, in my opinion). But the second solution is not the way it is intended in the protocol, I believe.

@dgutov
Contributor

dgutov commented Jan 14, 2021

Continuing the comparison with IDEA: they manage to update the window in the background, so you see new items appearing in the window, not necessarily at the end of the list. The selection is kept on the item selected by the user.

Hmm, I suppose I should see it first. VS Code does that as well, I imagine?

Whether we can do this in company also depends on the rendering methods available to us (overlays and child frame based popups). Like, if doing that will introduce extra glitches/blinking, then I'd wonder whether that is worth it.

@yyoncho
Member Author

yyoncho commented Jan 14, 2021

Like, if doing that will introduce extra glitches/blinking, then I'd wonder whether that is worth it.

Posframe solutions won't blink, right?

@dgutov
Contributor

dgutov commented Jan 14, 2021

We should test that with some prototype.

One reason I'm not personally using a posframe-based frontend yet is that they're generally slower (on a GTK3 Emacs build) and do show some artefacts from time to time.
