I'm trying to display the current model in use on the statusline. I can't use the config, because different prompts will use different models. Example: the default model is |
-
Nope, but I plan to rework how models/agents/contexts are stored (e.g. instead of storing the value passed to `ask` on the chat config, store it as a sticky prompt, which is something you can both see and parse easily).
-
This works for me:

```lua
local async = require('plenary.async')
local chat = require('CopilotChat')

local model = ''
local resolving_model = false

local function copilot_chat_model()
  -- While a resolution is already in flight, just return the cached value.
  if resolving_model then
    return model
  end
  resolving_model = true
  -- chat.resolve_model() may yield, so it has to run inside an async context.
  async.run(function()
    local resolved_model = chat.resolve_model()
    if resolved_model then
      model = resolved_model
    end
    resolving_model = false
  end)
  -- Return the last known value immediately; the statusline picks up the
  -- refreshed value on a later redraw.
  return model
end
```

Without the |
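As a usage sketch: the function above returns a plain string, so it can be dropped straight into a statusline. This example assumes the lualine plugin (not mentioned in this thread); any statusline that accepts a function component works the same way.

```lua
-- Hypothetical lualine setup using the copilot_chat_model() helper defined
-- above as a section component; lualine calls it on every redraw.
require('lualine').setup({
  sections = {
    lualine_x = { copilot_chat_model },
  },
})
```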
Should be possible to resolve this easily now, after #855, through `chat.resolve_model`. Just be aware this uses plenary.async, so ideally call it inside an `async.run(function() ... end)` call.
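A minimal sketch of that pattern, doing a one-off resolution rather than the statusline cache shown earlier (the `vim.notify` call is just an illustrative way to surface the result):

```lua
local async = require('plenary.async')
local chat = require('CopilotChat')

async.run(function()
  -- chat.resolve_model() may yield, so it must run inside an async context.
  local model = chat.resolve_model()
  vim.notify('CopilotChat model: ' .. (model or 'unknown'))
end)
```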