bug: fetch gets stuck very frequently #1568
Comments
Cannot reproduce. I have 78 plugins and git fetch never fails. Is this only related to ...?
No, it happens randomly for any plugin or a number of them.
Hm... If you disable luarocks support, the issue may be gone.
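For anyone reading along, one way to switch luarocks support off is via the setup options. This is a minimal sketch; the `rocks.enabled` field is assumed from current lazy.nvim documentation and may not exist on older versions:

```lua
-- Hedged sketch: disable lazy.nvim's luarocks integration entirely.
-- `rocks.enabled` is assumed from current lazy.nvim docs.
require("lazy").setup({
  { import = "plugin_specs" },
}, {
  rocks = {
    enabled = false, -- don't use luarocks/hererocks at all
  },
})
```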
I think I already disabled it. My config looks like this:

```lua
require("lazy").setup({
  import = "plugin_specs", -- see plugin specs in lua/plugin_specs
},
-- lazy.nvim configuration
{
  lockfile = vim.fn.stdpath("data") .. "/lazy/lazy-lock.json", -- lockfile generated after running update.
  pkg = {
    sources = { "lazy" } -- use only lazy specs
  },
  ui = {
  ...
```
I've also been running into this today, with random plugins failing to fetch during update/check/etc. It's almost never the same plugin, and triggering an update on that plugin individually succeeds.
Did you previously have to configure ...? That feature works again in the meantime.
On Windows, ...
macOS, this issue started for me yesterday, random plugins:

```
Failed (4)
  ○ indent-blankline.nvim  LazyFile
      fatal: unable to access 'https://github.com/lukas-reineke/indent-blankline.nvim.git/': Failed to connect to github.com port 443 after 75009 ms: Couldn't connect to server
  ● LuaSnip 11.62ms  start
      fatal: unable to access 'https://github.com/L3MON4D3/LuaSnip.git/': Failed to connect to github.com port 443 after 75003 ms: Couldn't connect to server
  ● telescope-file-browser.nvim 12.92ms  start
      fatal: unable to access 'https://github.com/nvim-telescope/telescope-file-browser.nvim.git/': Failed to connect to github.com port 443 after 75002 ms: Couldn't connect to server
  ● telescope-fzf-native.nvim 6.88ms  telescope.nvim
      fatal: unable to access 'https://github.com/nvim-telescope/telescope-fzf-native.nvim.git/': Failed to connect to github.com port 443 after 75003 ms: Couldn't connect to server
```
No, I didn't set concurrency, so I assume it's pretty much the default. It's still not working today, so the issue isn't fixed (Linux).
Looking at processes, I see a bunch of ...
Nothing changed related to that. The code is more efficient now though, so it may spawn more tasks concurrently than before. @afulki that's clearly an OS issue, not lazy's fault. Set concurrency to a lower number.
I changed concurrency at the root level and in `checker = { concurrency }` to 1; it still gets stuck on one fetch haha, didn't fix it:

```lua
require("lazy").setup("dko.plugins", {
  change_detection = {
    enabled = false,
  },
  checker = {
    concurrency = 1,
    -- needed to get the output of require("lazy.status").updates()
    enabled = true,
    -- get a notification when new updates are found?
    notify = false,
  },
  concurrency = 1,
  dev = {
    fallback = true,
    patterns = { "davidosomething" },
  },
  ui = { border = require("dko.settings").get("border") },
  performance = {
    rtp = {
      disabled_plugins = {
        "gzip",
        "matchit",
        "matchparen", -- vim-matchup will re-load this anyway
        "netrwPlugin",
        "tarPlugin",
        "tohtml",
        "tutor",
        "zipPlugin",
      },
    },
  },
})
```
@folke That would be fine, except I've not changed concurrency (fairly default LazyVim install). This has been working for over a year with no issues until yesterday or the day before.
@afulki like I said, the new lazy.nvim is more efficient, so it can run more stuff in parallel. Whether you like it or not, you'll have to set concurrency or fix your network settings.
@davidosomething and the ...
That all looks fine. It's just a ...
I just set concurrency to 1 and it still gets stuck. So something else is going on.
Might have been an ephemeral thing for me; I was running into this last night but everything is fine this morning (no concurrency set, running macOS 14.5).
In my case these linger around even with concurrency 1.
It isn't just this; I tried a brew update and it's having issues too: fatal: unable to access 'https://github.com/Homebrew/brew/': Failed to connect to github.com port 443 after 75021 ms: Couldn't connect to server. @folke thanks for the assistance, I will dig deeper.
Could it be that GitHub started blocking requests?
Setting a low timeout sort of works around this a bit.
The problem, though, is that git processes keep hanging around after nvim exits, so lazy.nvim fails to kill them despite saying that it did. That is a bug.
It's a sub-process that is not detached, so it should be killed by the OS regardless when exiting Neovim.
Well, I set concurrency to 1 and this still happens. So maybe GitHub has some weird dynamic blocking once it detects something?
To troubleshoot, we'd need some non-GitHub setup to see whether this is reproducible or not.
OK, setting checker concurrency to 1 seems to improve things. It still gets stuck if you re-run it a bunch of times in a row, but way less frequently. I suspect GitHub might have some anti-bot mechanism that's oversensitive.
Yeah, probably more like max X requests per hour, so concurrency might not be that relevant.
I wonder what the best option for lazy is here. It's annoying that GitHub doesn't publish their dynamic rate-limiting policies for simple ...
Is there a way to specify a short timeout in the git request to break that stall on the client side? Failures are better than hanging waits.
Seems like that doesn't work; see https://medium.com/@liwp.stephen/solve-the-git-fetch-origin-hangs-issue-again-457f645305d4
You could lower the lazy git timeout though.
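For reference, lazy.nvim exposes this in its setup options. A sketch, assuming the documented `git.timeout` field (seconds, default 120) and using a placeholder spec module name:

```lua
-- Sketch: make lazy.nvim kill stalled git processes sooner.
-- `git.timeout` is assumed from the lazy.nvim docs (seconds; default 120).
require("lazy").setup("plugins", {
  git = {
    timeout = 30, -- give up on a fetch after 30 seconds instead of 2 minutes
  },
})
```

Lowering it trades hangs for visible failures, which is what the previous question is asking for.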
I tried that, but the I/O blocking still makes it hang. I.e. I want git itself to abort sooner if the server is doing something weird.
I could change the lazy code to use ...
Yeah, I saw that article. I guess there is no easy way. Could be a missing git feature.
Hmm
https://git-scm.com/book/en/v2/Git-Internals-Environment-Variables
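The linked page covers git's environment variables; for the stall problem specifically, git's standard low-speed settings make a stalled HTTPS transfer fail instead of hanging. A sketch (plain git options, nothing lazy-specific):

```sh
# Sketch: abort a transfer whose speed stays below 1000 bytes/s for 10 seconds,
# instead of letting it hang. These are standard git http.* settings.
git config --global http.lowSpeedLimit 1000
git config --global http.lowSpeedTime 10

# The same knobs exist as environment variables for a one-off run:
GIT_HTTP_LOW_SPEED_LIMIT=1000 GIT_HTTP_LOW_SPEED_TIME=10 git fetch origin
```

Since lazy.nvim spawns plain git processes, anything set in the global git config or the environment should apply to its fetches as well.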
Can you also check your limits on open file descriptors and max connections/sockets etc.? I have all of those raised, so that's maybe why I'm not seeing issues. I have around 78 plugins.
I don't think I touched those. Which kernel variables did you increase?
You can check your limits with ..., and your current values are in /proc. You can find lots of guides online that can help you. If you did not raise your limits, then that may also cause issues. Not home right now.
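The exact commands didn't survive in the comment above; as a sketch, the usual ways to inspect those limits on Linux are:

```sh
# Per-shell limits (open file descriptors are usually the relevant ones)
ulimit -n     # soft limit on open files
ulimit -Hn    # hard limit on open files
ulimit -a     # everything

# Current values as the kernel sees them
cat /proc/self/limits        # limits of the current process
cat /proc/sys/fs/file-max    # system-wide maximum number of open files
```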
I'll take a look at limits a bit later, but just checking what happens in git when it's stuck that way: it is blocking on I/O, like I thought.
I'm running into this issue as well, including orphaned git processes.
So, I checked my limits for open file descriptors. I tried increasing the soft limit; that didn't really resolve it.
Random guess, but maybe it's about initiating requests in parallel at the same time (i.e. GitHub not liking that)? I.e. maybe staggering them with small delays (rather than dispatching them completely serially with no concurrency) could mitigate it? I can try writing some tests to simulate different scenarios.
Huh, even a simple script like that gets stuck (sometimes), and it's not even parallelized.
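The script itself isn't preserved in the thread; a rough sketch of that kind of serial, non-parallel fetch loop (assuming lazy.nvim's default plugin directory on Linux) might look like:

```sh
#!/bin/sh
# Hypothetical reconstruction, not the commenter's exact script:
# fetch each plugin clone under lazy.nvim's default data directory, one at a time.
for repo in "$HOME/.local/share/nvim/lazy"/*/; do
  echo "fetching $repo"
  git -C "$repo" fetch origin || echo "failed: $repo"
done
```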
I doubt it's related to limits. Something weird is most likely going on on GitHub's side.
Another weird detail: I just tried lazy checks over a VPN, and it's not getting stuck even with full parallelism! Even more reason to suspect GitHub now.
I've been having the same problem, and I'm starting to suspect the same.
After a recent lazy update I don't see this issue anymore.
I hadn't updated lazy yet, but it's not happening for me anymore either.
Yup, same with me. Oh well 🤷‍♂️
Indeed, it probably was a GitHub issue. I was also getting a lot of timeout issues from GitHub release/tag Atom feeds I was following, and those ended around the time of @shmerl's comment.
I've had the same issue over the last week. I'm on Debian 12, behind a VPN if that matters.
I set concurrency = 1 and that seems to fix the issue. Seems like a bug to me; I mean, I have 16 threads, so it seems odd that it times out. I'll play more with concurrency though.
Had this problem for ages and today finally got around to trying to fix it - this page was a godsend. Let's just say that if you have over 200 plugins (yeah, just trying out "a couple" right now) and a Threadripper processor, it's absolutely imperative to set that concurrency limit. GitHub starts to throw handshake errors at around 120 plugin checks with default settings. "concurrency = 8" seems to work fine. The Lazy UI looks a lot better now too; I just assumed an endless list of "Fetch" lines was the way it's supposed to be. :)
Did you check docs and existing issues?
Neovim version (nvim -v)
Operating system/version
Debian testing Linux
Other
Describe the bug
Since recent updates to lazy.nvim (main branch), when checking for plugin updates it often doesn't finish the fetch commands, which looks like this:
For a while:

And then:

Steps To Reproduce
Just try to run
:Lazy check
Expected Behavior
Fetch should complete in reasonable time.
Repro