Add a connection pool delegate #1515
Conversation
Motivation:

It can be useful to observe what the underlying connection pool is doing over time. This could, for example, be instrumenting it to understand how many streams are active at a given time, or to understand when a connection is in the process of being brought up.

Modifications:

Add new API:
- `GRPCConnectionPoolDelegate`, which users can implement and configure on the `GRPCChannelPool` to receive notifications about the connection pool state.
- `GRPCConnectionID`, an opaque ID to distinguish between different connections in the pool.

Modify the connection pool:
- Per-EventLoop pools now hold on to quiescing connections and track additional state, including whether a connection is quiescing and how many streams are currently open on a connection. By contrast, the pool manager (which manages a number of these per-EventLoop pools) tracks reserved streams, which are not necessarily open. The delegate tracks _opened_ streams rather than _reserved_ streams, as reserved streams are not allocated to a particular connection but to any connection running on an appropriate event loop.
- Wire through the new delegate and call out to it in appropriate places.
- Add a stream-opened function to the internal H2 delegate.
- Expose various pieces of state from the connection manager.

Result:

Users can instrument the underlying connection pool.
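For orientation, a rough sketch of how a user might adopt this. It is a sketch only: the concrete protocol requirements and configuration hook are whatever this PR's diff defines, so the callback names and the `delegate` property below are assumptions inferred from the description above, not confirmed signatures.

```swift
import GRPC
import NIOPosix

// Assumed shape of a delegate conformance; the exact requirements are those
// added in this PR, the callbacks below are illustrative guesses.
final class PoolEventLogger: GRPCConnectionPoolDelegate {
  func connectionAdded(id: GRPCConnectionID) { print("added \(id)") }
  func connectionRemoved(id: GRPCConnectionID) { print("removed \(id)") }
  func startedConnecting(id: GRPCConnectionID) { print("connecting \(id)") }
  func connectFailed(id: GRPCConnectionID, error: Error) { print("connect failed \(id): \(error)") }
  func connectSucceeded(id: GRPCConnectionID, streamCapacity: Int) {
    print("connected \(id), stream capacity \(streamCapacity)")
  }
  func connectionUtilizationChanged(id: GRPCConnectionID, streamsUsed: Int, streamCapacity: Int) {
    // Counts opened (not reserved) streams, per the description above.
    print("\(id): \(streamsUsed)/\(streamCapacity) streams open")
  }
  func connectionQuiescing(id: GRPCConnectionID) { print("quiescing \(id)") }
  func connectionClosed(id: GRPCConnectionID, error: Error?) { print("closed \(id)") }
}

let group = MultiThreadedEventLoopGroup(numberOfThreads: 1)
let channel = try GRPCChannelPool.with(
  target: .host("localhost", port: 8080),
  transportSecurity: .plaintext,
  eventLoopGroup: group
) {
  // Assumed configuration hook: set the pool delegate on the channel pool config.
  $0.delegate = PoolEventLogger()
}
```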
// If there's a close future then the underlying channel is open. It will close eventually when
// open streams are closed, so drop the H2 delegate and update the pool delegate when that
// happens.
if let closeFuture = self._connections.values[index].manager.sync.channel?.closeFuture {
Holy Law of Demeter batman! We absolutely need a clearer spelling for this idea.
Actually this whole block smells of an abstraction violation: we repeatedly reach through several layers of objects to get to the thing we want. This suggests that the intermediate objects lack sufficient API surface to do what we want.
Agreed. It's certainly more fiddly than I'd like and, as it turns out, also a little overprotective: we will be notified about quiescing only if there are open streams (if there aren't, the connection will just be idled), so the `else` branch here is redundant. I think an API for registering an on-close callback on the current connection would probably be sufficient here.
Force-pushed from fa158e5 to 8a04180
///
/// If there is a connection, the callback will be invoked with `true` when the connection is
/// closed. Otherwise the callback is invoked with `false`.
internal func onCurrentConnectionClose(_ onClose: @escaping (Bool) -> Void) {
This is a somewhat tricky API to use. Is there a reason it doesn't return whether the callback got registered?
We need to hop onto the right EL first, which makes this slightly awkward. A few options:

- let the caller provide a promise / return a future
- expose this via the `sync` view and return whether it was registered (could be in addition to the above)
- use a lock instead of the EL for synchronising here
- pass whether the callback was registered into the callback (current API)

I don't think any of these stand out as significantly better choices for an internal API. WDYT?
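For comparison, a minimal sketch of the first option (returning a future), assuming NIO's `EventLoopPromise`/`EventLoopFuture` and using an illustrative stand-in type rather than the actual connection manager:

```swift
import NIOCore

// Illustrative stand-in for the connection manager: just enough state to show
// the future-returning variant of `onCurrentConnectionClose`.
final class ConnectionCloseObserver {
  let eventLoop: EventLoop
  var channel: Channel?  // only touched on `eventLoop`

  init(eventLoop: EventLoop) {
    self.eventLoop = eventLoop
  }

  // Succeeds with `true` when the current connection closes, or with `false`
  // immediately if there is no current connection.
  func onCurrentConnectionClose() -> EventLoopFuture<Bool> {
    let promise = self.eventLoop.makePromise(of: Bool.self)
    // Hop to the right event loop before inspecting the current connection.
    self.eventLoop.execute {
      if let channel = self.channel {
        channel.closeFuture.whenComplete { _ in promise.succeed(true) }
      } else {
        promise.succeed(false)
      }
    }
    return promise.futureResult
  }
}
```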
Yeah, that's fair. I dunno man, it's all kinda gross. This works I suppose.