
Add GATEWAYS field to kubectl get httproute output #586


Closed
mark-church opened this issue Mar 15, 2021 · 21 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

@mark-church
Contributor

What would you like to be added:
Add a GATEWAYS field to the kubectl get httproute output. Thanks to feedback from @joshrosso, I realize it's challenging to understand which Gateways a Route is bound to, or whether it's bound at all. This field would key off the HTTPRoute status, which indicates the Gateways a Route is bound to: https://gateway-api.sigs.k8s.io/spec/#networking.x-k8s.io/v1alpha1.RouteGatewayStatus

The example below shows HTTPRoutes, some of which are bound to different Gateways, some bound to multiple Gateways, and one that's not bound to any Gateway (probably an accident the deployer doesn't know about yet). This would be really useful because it provides a quick snapshot of all the Gateways in use in a given Namespace without having to dig into each Route individually. It also provides a very quick check that a Route has been bound to a Gateway successfully after creation.

$ kubectl get httproute

NAME         HOSTNAMES                    GATEWAYS
route1       ["bar.example.com"]          prod-gw
route2       ["bar.example.com"]          dev-gw
route3       ["foo.example.com"]          dev-gw, staging-gw
route4       ["foo.example.com"]

cc @robscott @bowei @stevesloka @hbagdi

@mark-church mark-church added the kind/feature Categorizes issue or PR as related to a new feature. label Mar 15, 2021
@SantoDE
Contributor

SantoDE commented Mar 15, 2021

That would be so awesome

@robscott
Member

Although I agree that would be awesome, we're unfortunately quite limited by what we can do with the additionalPrinterColumns field that powers this for CRDs. I've tried a variety of different json path configs here and it appears that only the most basic types are supported (more info in kubernetes/kubernetes#67268).

When reading from lists, you can actually get pretty close with something like .spec.rules[*].forwardTo[*].serviceName, but unfortunately the * seems to get mapped to 0 and only prints the first item. Usually I'd use a range like {"{"}{range .status.gateways[*]}"{.gatewayRef.Name}" {end}{"}"} to accomplish this, but that fails validation. Unfortunately I don't think this is possible with our current constraints, but if anyone knows of a way to make this work I'd love to see it.
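
For illustration, a minimal sketch of how these columns are declared, assuming the usual kubebuilder printcolumn markers that generate the CRD's additionalPrinterColumns (the marker text and struct below are illustrative, not copied from the repo):

// The HOSTNAMES column works because .spec.hostnames is a plain path to a
// list, which gets printed as JSON. The GATEWAYS attempt passes validation
// but, per the limitation described above, only ever renders the first match.
// +kubebuilder:printcolumn:name="Hostnames",type=string,JSONPath=`.spec.hostnames`
// +kubebuilder:printcolumn:name="Gateways",type=string,JSONPath=`.status.gateways[*].gatewayRef.name`
type HTTPRoute struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   HTTPRouteSpec   `json:"spec,omitempty"`
	Status HTTPRouteStatus `json:"status,omitempty"`
}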

@mark-church
Contributor Author

Is there anyone in API machinery who can help us (cc @thockin)? The more I use Gateway hands-on, the more I see the need to make it very obvious how Routes and Gateways are bound. For example, I am testing a bunch of Routes against a bunch of different Gateways, but based on the CLI output, you'd guess that all these Routes are in conflict on the same Gateway:

$ k get httproute

NAME          HOSTNAMES
bar-route     ["bar.example.com"]
foo-route     ["foo.example.com"]
route1        ["foo.example.com"]
route2        ["foo.example.com"]
route3        ["foo.example.com"]
split-route   ["foo.example.com"]

@stevesloka
Contributor

I've had similar needs, and the only thing I could come up with is to add a metadata-style status section to expose more of this to users.

Here's a draft PR adding some bits to Contour; they don't translate 100%, but they're similar: https://github.com/projectcontour/contour/blob/e802dd5f8d2c5bb267299bda3a247d461be7b13d/design/httpproxy-status-meta.md

@robscott
Member

I think adding a new field to status may be our best option here, even if it is repetitive. For example, we could add a new connectedGateways []string field that could be printed out with the existing additionalPrinterColumns json path support.
Alternatively, if anyone felt like building out a kubectl plugin for making sense of the connections in this API, that could be quite useful. We also need to double-check with someone in sig-apimachinery whether more advanced JSONPath interpretations are still impossible in additionalPrinterColumns.

For reference, on a 1.19 cluster using kubectl 1.20, the following variations kinda work but only return the first item in the list:

.spec.rules[*].forwardTo[*].serviceName
.spec.rules[:].forwardTo[:].serviceName
.spec.rules..forwardTo..serviceName

And of course a list like this fails validation:

{"{"}{range .status.gateways[*]}"{.gatewayRef.Name}" {end}{"}"}

@lavalamp do you know if anything like this should work as a value for additionalPrinterColumns?

@robscott
Member

Quick follow-up on this: I got some help from @apelisse in tracking down how this is implemented. The behavior is intentional; the code picks the first matching result. Although we can file a feature request for more advanced support here, we likely need a better solution until we get to that point.

I think that means at least one of the following:

  1. A new []string field on route status, in namespace/name format, that can be used to track the connected Gateways (see the sketch after this list).
  2. A kubectl plugin that can provide better visualizations of the relationships between Gateways and Routes.
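
For option 1, a rough sketch of what such a field could look like (the field and column names here are placeholders, not an agreed-upon API):

// Sketch only: a flat helper field on HTTPRouteStatus, duplicated from the
// existing per-gateway status purely so that a basic JSONPath column can
// print it.
type HTTPRouteStatus struct {
	// Gateways is the existing per-gateway status list.
	Gateways []RouteGatewayStatus `json:"gateways"`

	// GatewayRefs lists the bound Gateways as "namespace/name" strings.
	// +optional
	GatewayRefs []string `json:"gatewayRefs,omitempty"`
}

// The matching column (declared on the HTTPRoute type) then needs no wildcard,
// so it works with additionalPrinterColumns as-is, using the same mechanism as
// the existing HOSTNAMES column:
// +kubebuilder:printcolumn:name="Gateways",type=string,JSONPath=`.status.gatewayRefs`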

@SantoDE
Contributor

SantoDE commented Mar 17, 2021

I would go with a field at first. A plugin is nice for more features, but if I'm troubleshooting things and I need a plugin, I'll probably go mad. So my vote is on a simple field first :p

@robscott
Member

A better reference is the upstream tracking issue kubernetes/kubectl#517. I hope to have a PR adding a new field ready for review shortly.

@SantoDE
Contributor

SantoDE commented Mar 18, 2021

You rock. Thanks :)

@hbagdi
Contributor

hbagdi commented Mar 30, 2021

I think this is a very useful thing to have, and a feature most users would simply expect.

I do have concerns about the duplicate-field workaround we are considering. This is not the first time we've run into the limitations of CRDs and gateway-api being out of tree, and it seems a tad early to start adding hacks into our API to make this work.
Even though a kubectl plugin is more complicated, should we consider that instead? A lot of custom behavior can be shipped in kubectl via a plugin, which can be very helpful for a complicated API.

@bowei
Contributor

bowei commented Mar 30, 2021

  • What is the scope for the kubectl plugin? Just for displaying status? More?
  • How hard is it to create/maintain?
  • How much do we push it as part of how users will interact with this API, versus as an example/prototype for CRD improvements?

@robscott
Member

It is fairly easy to write a kubectl plugin in my experience, it's just much more difficult to maintain it, especially if it's built against an alpha API with breaking changes planned. Although a plugin would enable us to do much more, it may not be as intuitive/straightforward for users as using kubectl directly. I'd be concerned if our kubectl experience was entirely dependent on a plugin, but I also realize that API changes like this add a fair amount of overhead to each implementation.
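
For concreteness, a minimal sketch of what such a plugin could look like; kubectl picks up any executable named kubectl-<name> on the PATH. The group/version/resource and status paths follow the v1alpha1 status layout linked above, while the plugin name, hard-coded namespace, and output format are made up for illustration:

// main.go for a hypothetical kubectl-gateways plugin: list HTTPRoutes in a
// namespace and print the Gateways recorded in their status.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// HTTPRoute as served by the v1alpha1 API.
	gvr := schema.GroupVersionResource{Group: "networking.x-k8s.io", Version: "v1alpha1", Resource: "httproutes"}
	routes, err := client.Resource(gvr).Namespace("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	fmt.Printf("%-15s %s\n", "NAME", "GATEWAYS")
	for _, route := range routes.Items {
		// Read status.gateways[].gatewayRef.{namespace,name} dynamically.
		gateways, _, _ := unstructured.NestedSlice(route.Object, "status", "gateways")
		var names []string
		for _, g := range gateways {
			gw, ok := g.(map[string]interface{})
			if !ok {
				continue
			}
			ns, _, _ := unstructured.NestedString(gw, "gatewayRef", "namespace")
			name, _, _ := unstructured.NestedString(gw, "gatewayRef", "name")
			names = append(names, ns+"/"+name)
		}
		fmt.Printf("%-15s %s\n", route.GetName(), strings.Join(names, ", "))
	}
}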

@hbagdi
Contributor

hbagdi commented Mar 31, 2021

Writing and maintaining a kubectl plugin will certainly involve non-trivial cost and I'd really like to avoid it as much as possible.
My concerns:

  • This small addition will be used in the future to justify adding such helper fields in other places, which would make the API cumbersome to consume and implement.
  • A field that exists only to help kubectl will surprise users. AFAIK, I haven't seen such fields upstream; happy to be corrected on this.

I'd be concerned if our kubectl experience was entirely dependent on a plugin

+1
At some point, if we are really serious, we will have to either explore moving Gateway API to upstream (not sure if it is viable) or work on tooling in upstream to improve support for CRDs.

If this were the only feature we wanted to add to help the kubectl experience, I'd be down to add it. I really doubt that's the case.
Another alternative would be to wait a little while before shipping this feature, see what other kubectl-experience requests come up, and then decide a path forward.

@SantoDE
Contributor

SantoDE commented Mar 31, 2021

Writing and maintaining a kubectl plugin will certainly involve non-trivial cost and I'd really like to avoid it as much as possible.

I very much agree. As I said previously, having a plugin as a "requirement" for better troubleshooting is, imho, not that user friendly.

I'm also down to wait a bit and see what other requests come in so we can come up with something "better".

@robscott robscott added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label Apr 14, 2021
@robscott robscott removed this from the v0.3.0 milestone Apr 23, 2021
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 22, 2021
@jpeach
Contributor

jpeach commented Jul 22, 2021

If this were the only feature we wanted to add to help the kubectl experience, I'd be down to add it. I really doubt that's the case.
Another alternative would be to wait a little while before shipping this feature, see what other kubectl-experience requests come up, and then decide a path forward.

Yeah, I agree that this is just the thin end of the wedge. IIUC, Cluster API ships various versions of clusterctl and expects people to use that tooling.

It also seems likely that we would want to improve the tooling at a different rate (rapidly) than that at which the API evolves (slowly).

It is fairly easy to write a kubectl plugin in my experience, it's just much more difficult to maintain it, especially if it's built against an alpha API with breaking changes planned.

Is it just a matter of where this cost is borne? If it's not done centrally, then it needs to be specced in the API and implemented in every controller.

@robscott
Member

One interesting alternative to the options currently being discussed (kubectl plugin vs. controller-populated fields) would be webhook-populated fields. That would also keep the cost central, but would not require any additional controller work, nor would users need to install a plugin.

@robscott
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 22, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 20, 2021
@jpeach
Contributor

jpeach commented Oct 21, 2021

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 21, 2021
@shaneutt
Member

shaneutt commented Aug 5, 2022

While grooming this one, it appears we're at a point where this is a nice-to-have but not easy to implement. We're going to close it since we don't expect anyone is ready to drive it forward, but if you still want this feature and have a strong use case, we'll be happy to reconsider and re-open.

@shaneutt shaneutt closed this as not planned Aug 5, 2022