
Actually init logging using Zap #267

Merged: 1 commit merged into kubernetes-sigs:main from zap-logger on Feb 12, 2025
Conversation

@tchap (Contributor) commented Jan 31, 2025

This is a bit of an optimistic attempt of mine to push Zap through, since controllers typically use Zap these days, AFAIK.

Feel free to push back, since there is a potential issue with the flags not being compatible. This is somewhat mitigated by supporting -v manually.

Related to #251
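
Roughly speaking, the init boils down to something like the following sketch (illustrative only, not the exact diff; the Development setting and the "setup" logger name are just placeholders):

```go
package main

import (
	"flag"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

func main() {
	// Register the controller-runtime zap flags (-zap-log-level, -zap-encoder, ...).
	opts := zap.Options{Development: true}
	opts.BindFlags(flag.CommandLine)
	flag.Parse()

	// Install a Zap-backed logr.Logger as the global controller-runtime logger.
	ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))

	setupLog := ctrl.Log.WithName("setup")
	setupLog.Info("logger initialized")
	// ... build the manager and register the controllers as usual.
}
```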

You can compare the output before vs after. Before, with the current klog-based setup:

I0131 20:00:45.986907   15562 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0131 20:00:45.986921   15562 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0131 20:00:45.986926   15562 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0131 20:00:45.986931   15562 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0131 20:00:45.987145   15562 runserver.go:150] Starting controller manager
I0131 20:00:45.987181   15562 main.go:176] Starting metrics HTTP handler ...
I0131 20:00:45.987356   15562 server.go:208] "Starting metrics server" logger="controller-runtime.metrics"
I0131 20:00:45.987387   15562 main.go:160] Health server listening on port: 9003
I0131 20:00:45.987393   15562 runserver.go:122] Ext-proc server listening on port: 9002
I0131 20:00:45.987464   15562 provider.go:68] Initialized pods and metrics: []
I0131 20:00:45.987485   15562 server.go:247] "Serving metrics server" logger="controller-runtime.metrics" bindAddress=":8080" secure=false
I0131 20:00:45.987724   15562 controller.go:198] "Starting EventSource" controller="inferencemodel" controllerGroup="inference.networking.x-k8s.io" controllerKind="InferenceModel" source="kind source: *v1alpha1.InferenceModel"
I0131 20:00:45.987724   15562 controller.go:198] "Starting EventSource" controller="endpointslice" controllerGroup="discovery.k8s.io" controllerKind="EndpointSlice" source="kind source: *v1.EndpointSlice"
I0131 20:00:45.987724   15562 controller.go:198] "Starting EventSource" controller="inferencepool" controllerGroup="inference.networking.x-k8s.io" controllerKind="InferencePool" source="kind source: *v1alpha1.InferencePool"
E0131 20:00:45.999749   15562 kind.go:76] "failed to get informer from cache" err="unable to retrieve the complete list of server APIs: inference.networking.x-k8s.io/v1alpha1: no matches for inference.networking.x-k8s.io/v1alpha1, Resource=" logger="controller-runtime.source.EventHandler"
E0131 20:00:46.001241   15562 kind.go:71] "if kind is a CRD, it should be installed before calling Start" err="no matches for kind \"InferencePool\" in version \"inference.networking.x-k8s.io/v1alpha1\"" logger="controller-runtime.source.EventHandler" kind="InferencePool.inference.networking.x-k8s.io"
I0131 20:00:46.002027   15562 reflector.go:376] Caches populated for *v1.EndpointSlice from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:251
I0131 20:00:46.100242   15562 controller.go:233] "Starting Controller" controller="endpointslice" controllerGroup="discovery.k8s.io" controllerKind="EndpointSlice"
I0131 20:00:46.100289   15562 controller.go:242] "Starting workers" controller="endpointslice" controllerGroup="discovery.k8s.io" controllerKind="EndpointSlice" worker count=1
^CI0131 20:00:52.654601   15562 internal.go:538] "Stopping and waiting for non leader election runnables"
I0131 20:00:52.654651   15562 internal.go:542] "Stopping and waiting for leader election runnables"
I0131 20:00:52.654695   15562 controller.go:262] "Shutdown signal received, waiting for all workers to finish" controller="endpointslice" controllerGroup="discovery.k8s.io" controllerKind="EndpointSlice"
I0131 20:00:52.654722   15562 controller.go:264] "All workers finished" controller="endpointslice" controllerGroup="discovery.k8s.io" controllerKind="EndpointSlice"
I0131 20:00:52.654706   15562 controller.go:233] "Starting Controller" controller="inferencemodel" controllerGroup="inference.networking.x-k8s.io" controllerKind="InferenceModel"
I0131 20:00:52.654729   15562 controller.go:233] "Starting Controller" controller="inferencepool" controllerGroup="inference.networking.x-k8s.io" controllerKind="InferencePool"
I0131 20:00:52.654752   15562 controller.go:242] "Starting workers" controller="inferencemodel" controllerGroup="inference.networking.x-k8s.io" controllerKind="InferenceModel" worker count=1
I0131 20:00:52.654780   15562 controller.go:262] "Shutdown signal received, waiting for all workers to finish" controller="inferencemodel" controllerGroup="inference.networking.x-k8s.io" controllerKind="InferenceModel"
I0131 20:00:52.654797   15562 controller.go:264] "All workers finished" controller="inferencemodel" controllerGroup="inference.networking.x-k8s.io" controllerKind="InferenceModel"
I0131 20:00:52.654791   15562 controller.go:242] "Starting workers" controller="inferencepool" controllerGroup="inference.networking.x-k8s.io" controllerKind="InferencePool" worker count=1
I0131 20:00:52.654842   15562 controller.go:262] "Shutdown signal received, waiting for all workers to finish" controller="inferencepool" controllerGroup="inference.networking.x-k8s.io" controllerKind="InferencePool"
I0131 20:00:52.654853   15562 controller.go:264] "All workers finished" controller="inferencepool" controllerGroup="inference.networking.x-k8s.io" controllerKind="InferencePool"
I0131 20:00:52.654869   15562 internal.go:550] "Stopping and waiting for caches"
I0131 20:00:52.655007   15562 internal.go:554] "Stopping and waiting for webhooks"
I0131 20:00:52.655037   15562 internal.go:557] "Stopping and waiting for HTTP servers"
I0131 20:00:52.655075   15562 server.go:254] "Shutting down metrics server with timeout of 1 minute" logger="controller-runtime.metrics"
I0131 20:00:52.655168   15562 internal.go:561] "Wait completed, proceeding to shutdown the manager"
I0131 20:00:52.655186   15562 runserver.go:155] Controller manager shutting down
I0131 20:00:52.655197   15562 main.go:133] Health server shutting down
I0131 20:00:52.655250   15562 main.go:137] Ext-proc server shutting down
I0131 20:00:52.655268   15562 main.go:166] Health server shutting down
I0131 20:00:52.655282   15562 main.go:147] All components shutdown
I0131 20:00:52.655292   15562 runserver.go:140] Ext-proc server shutting down

And after, with Zap:

2025-01-31T20:02:31+01:00	INFO	Starting controller manager
2025-01-31T20:02:31+01:00	INFO	Starting metrics HTTP handler ...
2025-01-31T20:02:31+01:00	INFO	controller-runtime.metrics	Starting metrics server
2025-01-31T20:02:31+01:00	INFO	Health server listening on port: 9003
2025-01-31T20:02:31+01:00	INFO	controller-runtime.metrics	Serving metrics server	{"bindAddress": ":8080", "secure": false}
2025-01-31T20:02:31+01:00	INFO	Ext-proc server listening on port: 9002
2025-01-31T20:02:31+01:00	INFO	Initialized pods and metrics: []
2025-01-31T20:02:31+01:00	INFO	Starting EventSource	{"controller": "endpointslice", "controllerGroup": "discovery.k8s.io", "controllerKind": "EndpointSlice", "source": "kind source: *v1.EndpointSlice"}
2025-01-31T20:02:31+01:00	INFO	Starting EventSource	{"controller": "inferencepool", "controllerGroup": "inference.networking.x-k8s.io", "controllerKind": "InferencePool", "source": "kind source: *v1alpha1.InferencePool"}
2025-01-31T20:02:31+01:00	INFO	Starting EventSource	{"controller": "inferencemodel", "controllerGroup": "inference.networking.x-k8s.io", "controllerKind": "InferenceModel", "source": "kind source: *v1alpha1.InferenceModel"}
2025-01-31T20:02:32+01:00	ERROR	controller-runtime.source.EventHandler	if kind is a CRD, it should be installed before calling Start	{"kind": "InferenceModel.inference.networking.x-k8s.io", "error": "no matches for kind \"InferenceModel\" in version \"inference.networking.x-k8s.io/v1alpha1\""}
sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1
	/Users/ondrejkupka/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/source/kind.go:71
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1
	/Users/ondrejkupka/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/loop.go:53
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext
	/Users/ondrejkupka/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/loop.go:54
k8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel
	/Users/ondrejkupka/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/poll.go:33
sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1
	/Users/ondrejkupka/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/source/kind.go:64
2025-01-31T20:02:32+01:00	ERROR	controller-runtime.source.EventHandler	if kind is a CRD, it should be installed before calling Start	{"kind": "InferencePool.inference.networking.x-k8s.io", "error": "no matches for kind \"InferencePool\" in version \"inference.networking.x-k8s.io/v1alpha1\""}
sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1
	/Users/ondrejkupka/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/source/kind.go:71
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1
	/Users/ondrejkupka/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/loop.go:53
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext
	/Users/ondrejkupka/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/loop.go:54
k8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel
	/Users/ondrejkupka/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/poll.go:33
sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1
	/Users/ondrejkupka/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/source/kind.go:64
2025-01-31T20:02:32+01:00	INFO	Starting Controller	{"controller": "endpointslice", "controllerGroup": "discovery.k8s.io", "controllerKind": "EndpointSlice"}
2025-01-31T20:02:32+01:00	INFO	Starting workers	{"controller": "endpointslice", "controllerGroup": "discovery.k8s.io", "controllerKind": "EndpointSlice", "worker count": 1}
^C2025-01-31T20:02:33+01:00	INFO	Stopping and waiting for non leader election runnables
2025-01-31T20:02:33+01:00	INFO	Stopping and waiting for leader election runnables
2025-01-31T20:02:33+01:00	INFO	Shutdown signal received, waiting for all workers to finish	{"controller": "endpointslice", "controllerGroup": "discovery.k8s.io", "controllerKind": "EndpointSlice"}
2025-01-31T20:02:33+01:00	INFO	Starting Controller	{"controller": "inferencepool", "controllerGroup": "inference.networking.x-k8s.io", "controllerKind": "InferencePool"}
2025-01-31T20:02:33+01:00	INFO	All workers finished	{"controller": "endpointslice", "controllerGroup": "discovery.k8s.io", "controllerKind": "EndpointSlice"}
2025-01-31T20:02:33+01:00	INFO	Starting Controller	{"controller": "inferencemodel", "controllerGroup": "inference.networking.x-k8s.io", "controllerKind": "InferenceModel"}
2025-01-31T20:02:33+01:00	INFO	Starting workers	{"controller": "inferencemodel", "controllerGroup": "inference.networking.x-k8s.io", "controllerKind": "InferenceModel", "worker count": 1}
2025-01-31T20:02:33+01:00	INFO	Shutdown signal received, waiting for all workers to finish	{"controller": "inferencemodel", "controllerGroup": "inference.networking.x-k8s.io", "controllerKind": "InferenceModel"}
2025-01-31T20:02:33+01:00	INFO	All workers finished	{"controller": "inferencemodel", "controllerGroup": "inference.networking.x-k8s.io", "controllerKind": "InferenceModel"}
2025-01-31T20:02:33+01:00	INFO	Starting workers	{"controller": "inferencepool", "controllerGroup": "inference.networking.x-k8s.io", "controllerKind": "InferencePool", "worker count": 1}
2025-01-31T20:02:33+01:00	INFO	Shutdown signal received, waiting for all workers to finish	{"controller": "inferencepool", "controllerGroup": "inference.networking.x-k8s.io", "controllerKind": "InferencePool"}
2025-01-31T20:02:33+01:00	INFO	All workers finished	{"controller": "inferencepool", "controllerGroup": "inference.networking.x-k8s.io", "controllerKind": "InferencePool"}
2025-01-31T20:02:33+01:00	INFO	Stopping and waiting for caches
2025-01-31T20:02:33+01:00	INFO	Stopping and waiting for webhooks
2025-01-31T20:02:33+01:00	INFO	Stopping and waiting for HTTP servers
2025-01-31T20:02:33+01:00	INFO	controller-runtime.metrics	Shutting down metrics server with timeout of 1 minute
2025-01-31T20:02:33+01:00	INFO	Wait completed, proceeding to shutdown the manager
2025-01-31T20:02:33+01:00	INFO	Controller manager shutting down
2025-01-31T20:02:33+01:00	INFO	Health server shutting down
2025-01-31T20:02:33+01:00	INFO	Ext-proc server shutting down
2025-01-31T20:02:33+01:00	INFO	Health server shutting down
2025-01-31T20:02:33+01:00	INFO	All components shutdown
2025-01-31T20:02:33+01:00	INFO	Ext-proc server shutting down

@k8s-ci-robot added the cncf-cla: yes label on Jan 31, 2025
@k8s-ci-robot added the needs-ok-to-test label on Jan 31, 2025
@k8s-ci-robot (Contributor)

Hi @tchap. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

netlify bot commented Jan 31, 2025

Deploy Preview for gateway-api-inference-extension ready!

🔨 Latest commit: 32f53cd
🔍 Latest deploy log: https://app.netlify.com/sites/gateway-api-inference-extension/deploys/67ad22e38c2c680008ed868d
😎 Deploy Preview: https://deploy-preview-267--gateway-api-inference-extension.netlify.app

@k8s-ci-robot added the size/M label on Jan 31, 2025
@liu-cong (Contributor)

Thanks for driving this! Can you add a bit more context on the motivation to move to zap? Maybe there are obvious reasons. I just don't have the context.

Feel free to push back since there is a potential issue with the flags not being compatible.

Can you elaborate on the "issue"? We definitely need to fix the issue to move forward.

You can compare the output before vs after:

The "after" doesn't have the go file and line numbers, which are important for debugging.

@tchap (Contributor, Author) commented Feb 1, 2025

Can you add a bit more context on the motivation to move to zap?

I basically just wanted to get rid of the TODO logger init, so I checked what others are doing: what kubebuilder generates for a new project, and also https://github.com/kubernetes-sigs/kueue. Both use Zap for logging, so I just went with that.

Can you elaborate on the "issue"? We definitely need to fix the issue to move forward.

The flags being bound are just different:

https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/log/zap#Options.BindFlags

https://pkg.go.dev/k8s.io/klog/[email protected]#InitFlags

I noticed that -v is being used somewhere in a YAML file. -zap-log-level actually accepts the same values, so you could just use that flag, but I also added an option to keep using -v. There are obviously many other flags that are just different; I'm not sure how much that matters or how many of them are actually used.
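
The -v shim itself is tiny; conceptually it is something like this (a sketch of the approach, not necessarily the exact code in the diff):

```go
package main

import (
	"flag"

	"go.uber.org/zap/zapcore"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

func main() {
	// Keep accepting a klog-style -v flag next to the -zap-* flags.
	verbosity := flag.Int("v", 0, "log verbosity (klog -v semantics)")

	opts := zap.Options{}
	opts.BindFlags(flag.CommandLine)
	flag.Parse()

	if *verbosity > 0 {
		// logr's V(n) is written at zap level -n, so enabling level -n
		// makes V(0)..V(n) visible, which mirrors klog's -v behaviour.
		opts.Level = zapcore.Level(-*verbosity)
	}

	ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))
}
```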

The "after" doesn't have the go file and line numbers, which are important for debugging.

Will check that out later today, thanks.

@tchap (Contributor, Author) commented Feb 1, 2025

Alright, added line numbers:

2025-02-01T08:25:32+01:00	INFO	server/runserver.go:150	Starting controller manager
2025-02-01T08:25:32+01:00	INFO	ext-proc/main.go:204	Starting metrics HTTP handler ...
2025-02-01T08:25:32+01:00	INFO	ext-proc/main.go:188	Health server listening on port: 9003
2025-02-01T08:25:32+01:00	INFO	server/runserver.go:122	Ext-proc server listening on port: 9002
2025-02-01T08:25:32+01:00	INFO	controller-runtime.metrics	server/server.go:208	Starting metrics server
2025-02-01T08:25:32+01:00	INFO	backend/provider.go:68	Initialized pods and metrics: []
2025-02-01T08:25:32+01:00	INFO	controller-runtime.metrics	server/server.go:247	Serving metrics server	{"bindAddress": ":8080", "secure": false}
2025-02-01T08:25:32+01:00	INFO	controller/controller.go:198	Starting EventSource	{"controller": "endpointslice", "controllerGroup": "discovery.k8s.io", "controllerKind": "EndpointSlice", "source": "kind source: *v1.EndpointSlice"}
2025-02-01T08:25:32+01:00	INFO	controller/controller.go:198	Starting EventSource	{"controller": "inferencepool", "controllerGroup": "inference.networking.x-k8s.io", "controllerKind": "InferencePool", "source": "kind source: *v1alpha1.InferencePool"}
2025-02-01T08:25:32+01:00	INFO	controller/controller.go:198	Starting EventSource	{"controller": "inferencemodel", "controllerGroup": "inference.networking.x-k8s.io", "controllerKind": "InferenceModel", "source": "kind source: *v1alpha1.InferenceModel"}
2025-02-01T08:25:32+01:00	ERROR	controller-runtime.source.EventHandler	source/kind.go:71	if kind is a CRD, it should be installed before calling Start	{"kind": "InferencePool.inference.networking.x-k8s.io", "error": "no matches for kind \"InferencePool\" in version \"inference.networking.x-k8s.io/v1alpha1\""}
sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1
	/Users/ondrejkupka/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/source/kind.go:71
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1
	/Users/ondrejkupka/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/loop.go:53
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext
	/Users/ondrejkupka/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/loop.go:54
k8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel
	/Users/ondrejkupka/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/poll.go:33
sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1
	/Users/ondrejkupka/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/source/kind.go:64
2025-02-01T08:25:32+01:00	ERROR	controller-runtime.source.EventHandler	source/kind.go:71	if kind is a CRD, it should be installed before calling Start	{"kind": "InferenceModel.inference.networking.x-k8s.io", "error": "no matches for kind \"InferenceModel\" in version \"inference.networking.x-k8s.io/v1alpha1\""}
sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1
	/Users/ondrejkupka/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/source/kind.go:71
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1
	/Users/ondrejkupka/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/loop.go:53
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext
	/Users/ondrejkupka/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/loop.go:54
k8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel
	/Users/ondrejkupka/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/poll.go:33
sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1
	/Users/ondrejkupka/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/source/kind.go:64
2025-02-01T08:25:32+01:00	INFO	controller/controller.go:233	Starting Controller	{"controller": "endpointslice", "controllerGroup": "discovery.k8s.io", "controllerKind": "EndpointSlice"}
2025-02-01T08:25:32+01:00	INFO	controller/controller.go:242	Starting workers	{"controller": "endpointslice", "controllerGroup": "discovery.k8s.io", "controllerKind": "EndpointSlice", "worker count": 1}
2025-02-01T08:25:42+01:00	ERROR	controller-runtime.source.EventHandler	source/kind.go:71	if kind is a CRD, it should be installed before calling Start	{"kind": "InferencePool.inference.networking.x-k8s.io", "error": "no matches for kind \"InferencePool\" in version \"inference.networking.x-k8s.io/v1alpha1\""}
sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1
	/Users/ondrejkupka/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/source/kind.go:71
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func2
	/Users/ondrejkupka/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/loop.go:87
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext
	/Users/ondrejkupka/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/loop.go:88
k8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel
	/Users/ondrejkupka/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/poll.go:33
sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1
	/Users/ondrejkupka/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/source/kind.go:64
2025-02-01T08:25:42+01:00	ERROR	controller-runtime.source.EventHandler	source/kind.go:71	if kind is a CRD, it should be installed before calling Start	{"kind": "InferenceModel.inference.networking.x-k8s.io", "error": "no matches for kind \"InferenceModel\" in version \"inference.networking.x-k8s.io/v1alpha1\""}
sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1.1
	/Users/ondrejkupka/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/source/kind.go:71
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func2
	/Users/ondrejkupka/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/loop.go:87
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext
	/Users/ondrejkupka/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/loop.go:88
k8s.io/apimachinery/pkg/util/wait.PollUntilContextCancel
	/Users/ondrejkupka/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/poll.go:33
sigs.k8s.io/controller-runtime/pkg/internal/source.(*Kind[...]).Start.func1
	/Users/ondrejkupka/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/source/kind.go:64
^C2025-02-01T08:25:44+01:00	INFO	manager/internal.go:538	Stopping and waiting for non leader election runnables
2025-02-01T08:25:44+01:00	INFO	manager/internal.go:542	Stopping and waiting for leader election runnables
2025-02-01T08:25:44+01:00	INFO	controller/controller.go:233	Starting Controller	{"controller": "inferencemodel", "controllerGroup": "inference.networking.x-k8s.io", "controllerKind": "InferenceModel"}
2025-02-01T08:25:44+01:00	INFO	controller/controller.go:242	Starting workers	{"controller": "inferencemodel", "controllerGroup": "inference.networking.x-k8s.io", "controllerKind": "InferenceModel", "worker count": 1}
2025-02-01T08:25:44+01:00	INFO	controller/controller.go:262	Shutdown signal received, waiting for all workers to finish	{"controller": "endpointslice", "controllerGroup": "discovery.k8s.io", "controllerKind": "EndpointSlice"}
2025-02-01T08:25:44+01:00	INFO	controller/controller.go:233	Starting Controller	{"controller": "inferencepool", "controllerGroup": "inference.networking.x-k8s.io", "controllerKind": "InferencePool"}
2025-02-01T08:25:44+01:00	INFO	controller/controller.go:264	All workers finished	{"controller": "endpointslice", "controllerGroup": "discovery.k8s.io", "controllerKind": "EndpointSlice"}
2025-02-01T08:25:44+01:00	INFO	controller/controller.go:262	Shutdown signal received, waiting for all workers to finish	{"controller": "inferencemodel", "controllerGroup": "inference.networking.x-k8s.io", "controllerKind": "InferenceModel"}
2025-02-01T08:25:44+01:00	INFO	controller/controller.go:242	Starting workers	{"controller": "inferencepool", "controllerGroup": "inference.networking.x-k8s.io", "controllerKind": "InferencePool", "worker count": 1}
2025-02-01T08:25:44+01:00	INFO	controller/controller.go:262	Shutdown signal received, waiting for all workers to finish	{"controller": "inferencepool", "controllerGroup": "inference.networking.x-k8s.io", "controllerKind": "InferencePool"}
2025-02-01T08:25:44+01:00	INFO	controller/controller.go:264	All workers finished	{"controller": "inferencepool", "controllerGroup": "inference.networking.x-k8s.io", "controllerKind": "InferencePool"}
2025-02-01T08:25:44+01:00	INFO	controller/controller.go:264	All workers finished	{"controller": "inferencemodel", "controllerGroup": "inference.networking.x-k8s.io", "controllerKind": "InferenceModel"}
2025-02-01T08:25:44+01:00	INFO	manager/internal.go:550	Stopping and waiting for caches
2025-02-01T08:25:44+01:00	INFO	manager/internal.go:554	Stopping and waiting for webhooks
2025-02-01T08:25:44+01:00	INFO	manager/internal.go:557	Stopping and waiting for HTTP servers
2025-02-01T08:25:44+01:00	INFO	controller-runtime.metrics	server/server.go:254	Shutting down metrics server with timeout of 1 minute
2025-02-01T08:25:44+01:00	INFO	manager/internal.go:561	Wait completed, proceeding to shutdown the manager
2025-02-01T08:25:44+01:00	INFO	server/runserver.go:155	Controller manager shutting down
2025-02-01T08:25:44+01:00	INFO	ext-proc/main.go:142	Health server shutting down
2025-02-01T08:25:44+01:00	INFO	ext-proc/main.go:146	Ext-proc server shutting down
2025-02-01T08:25:44+01:00	INFO	ext-proc/main.go:194	Health server shutting down
2025-02-01T08:25:44+01:00	INFO	server/runserver.go:140	Ext-proc server shutting down
2025-02-01T08:25:44+01:00	INFO	ext-proc/main.go:156	All components shutdown
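
For reference, one way to switch the caller annotation on with the controller-runtime zap wrapper is via Options.ZapOpts, roughly as below (a sketch of one possible way; the actual change may do this slightly differently):

```go
package main

import (
	uberzap "go.uber.org/zap"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

func main() {
	// AddCaller makes zap record the file:line of each log call; the
	// wrapper layers account for their own call depth, so the reported
	// location ends up being the actual call site.
	opts := zap.Options{
		Development: true,
		ZapOpts:     []uberzap.Option{uberzap.AddCaller()},
	}
	ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))
}
```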

@tchap force-pushed the zap-logger branch 2 times, most recently from 0bbc96a to b3ea552, on February 1, 2025 at 07:39
@danehans (Contributor) commented Feb 3, 2025

@tchap please excuse our delay as we are preparing to release v0.1.0-rc1.

@k8s-ci-robot added the needs-rebase label on Feb 4, 2025
@k8s-ci-robot removed the needs-rebase label on Feb 4, 2025
@kfswain (Collaborator) commented Feb 10, 2025

If our goal is to remove the TODO() (and fair enough), I wonder if there is a thinner way to do this.

This is a bit of a personal opinion, but I don't love zap + controller-runtime logging; I've used it before and had issues with the ergonomics. I don't feel so strongly about it that I simply cannot work with zap, but I've intentionally used klog in the reconcilers because that lets us have a unified logging experience and format (even outside of the reconciler logic).

Let me know what you think.

@k8s-ci-robot added the needs-rebase label on Feb 10, 2025
@k8s-ci-robot removed the needs-rebase label on Feb 11, 2025
@tchap (Contributor, Author) commented Feb 11, 2025

If our goal is to remove the TODO() (and fair enough), I wonder if there is a thinner way to do this.

This is a bit of a personal opinion, but I don't love zap + controller-runtime logging; I've used it before and had issues with the ergonomics. I don't feel so strongly about it that I simply cannot work with zap, but I've intentionally used klog in the reconcilers because that lets us have a unified logging experience and format (even outside of the reconciler logic).

Let me know what you think.

I tried using k8s.io/klog/v2/klogr, but all functions in that package are actually marked as deprecated, and the executable deadlocks when I try to use klogr.New. Otherwise, a list of compatible loggers is available in the logr README.

I don't really have a strong preference. I did use Zap in the past, although not wrapped like this, and the log format seems pretty readable to me. It's true that the way it handles verbosity levels is not super elegant. Feel free to point me to a Logger implementation that would work better.

I must say, though, that you never really use Zap in your executable code anyway. You just get a logr.Logger, which seems ergonomically OK? So we just need to decide on the underlying implementation to supply...
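
To illustrate that last point, a hypothetical reconciler (names made up purely for illustration) only ever sees logr.Logger, so it looks the same regardless of which backend we pick:

```go
package controller

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
)

// ExampleReconciler is a made-up reconciler used purely to show that
// application code only ever touches logr.Logger.
type ExampleReconciler struct{}

func (r *ExampleReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// ctrl.LoggerFrom returns the logr.Logger injected by the manager;
	// whether it is backed by Zap, klog, or anything else is invisible here.
	log := ctrl.LoggerFrom(ctx)
	log.Info("reconciling", "namespace", req.Namespace, "name", req.Name)
	log.V(1).Info("extra detail, only visible at higher verbosity")
	return ctrl.Result{}, nil
}
```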

@k8s-ci-robot added the needs-rebase label on Feb 11, 2025
@kfswain (Collaborator) commented Feb 12, 2025

I must say, though, that you never really use Zap in your executable code anyway.

Yeah, fair points; this is unquestionably better than TODO() as well.

/lgtm
/approve

Thanks for the help!

@k8s-ci-robot added the lgtm label on Feb 12, 2025
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: kfswain, tchap

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved label on Feb 12, 2025
Controllers typically use Zap these days.
The only potential issue is that the flags are not compatible.
This is somewhat mitigated by supporting -v explicitly.
@k8s-ci-robot removed the lgtm label on Feb 12, 2025
@k8s-ci-robot removed the needs-rebase label on Feb 12, 2025
@kfswain (Collaborator) commented Feb 12, 2025

/lgtm

@k8s-ci-robot added the lgtm label on Feb 12, 2025
@k8s-ci-robot merged commit 242b73e into kubernetes-sigs:main on Feb 12, 2025
8 checks passed
@tchap deleted the zap-logger branch on February 12, 2025 at 22:50
coolkp pushed a commit to coolkp/llm-instance-gateway that referenced this pull request Feb 13, 2025
Controllers typically use Zap these days.
The only potential issue is that the flags are not compatible.
This is somewhat mitigated by supporting -v explicitly.