client: add close method for client pool #797
Conversation
BTW, can you help us fix the CI? 😄
client/pool.go
Outdated
@@ -143,6 +145,9 @@ func (pool *Pool) GetStats(stats *ConnectionStats) {
// GetConn returns connection from the pool or create new
func (pool *Pool) GetConn(ctx context.Context) (*Conn, error) {
	for {
		if atomic.LoadUint32(&pool.closed) == 1 {
I'm not sure about your usage. Will you call Close concurrently with GetConn?
Under normal conditions we won't call Close concurrently with GetConn. I added this line to tell the caller that the pool has been closed if GetConn is called after Close.
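For illustration, a minimal, self-contained sketch of that fail-fast behaviour is below. The Pool and Conn definitions are stand-ins and ErrPoolClosed is a hypothetical error value; none of this is the PR's actual code.

package client

import (
	"context"
	"errors"
	"sync/atomic"
)

// ErrPoolClosed is a hypothetical error value used only in this sketch.
var ErrPoolClosed = errors.New("connection pool is closed")

// Stand-in types for illustration only; the real Pool and Conn differ.
type Conn struct{}

type Pool struct {
	closed          uint32
	readyConnection chan *Conn
}

// GetConn fails fast when Close has already set pool.closed, instead of
// waiting for a connection that will never be produced.
func (pool *Pool) GetConn(ctx context.Context) (*Conn, error) {
	if atomic.LoadUint32(&pool.closed) == 1 {
		return nil, ErrPoolClosed
	}
	select {
	case conn := <-pool.readyConnection:
		return conn, nil
	case <-ctx.Done():
		return nil, ctx.Err()
	}
}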
client: wait connection producer exit in pool.Close()
client: fix ci
@lance6716 PTAL
@@ -224,10 +231,14 @@ func (pool *Pool) putConnectionUnsafe(connection Connection) {
}

func (pool *Pool) newConnectionProducer() {
I found another problem, please help me check it. In line 271, pool.readyConnection <- connection may block the for loop. So if Close tries to drain pool.readyConnection first and then newConnectionProducer sends on the channel, it will deadlock. Generally, I think a <-pool.ctx.Done() branch in a select alongside pool.readyConnection <- connection will help avoid the blocking. And pool.ctx.Err() != nil can replace pool.closed.
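To make the suggestion concrete, here is a self-contained sketch of a producer that selects on the pool's context alongside the channel send. The field names (ctx, cancel, wg, readyConnection) and the Connection stand-in are assumptions for illustration, not the PR's exact code.

package client

import (
	"context"
	"sync"
)

// Illustrative stand-ins; the real types carry actual MySQL connections.
type Connection struct{}

type Pool struct {
	ctx             context.Context
	cancel          context.CancelFunc
	wg              sync.WaitGroup
	readyConnection chan Connection
}

// newConnectionProducer is assumed to be started with pool.wg.Add(1).
// Selecting on pool.ctx.Done() next to the channel send means the producer
// can always exit once Close cancels the context, so a Close that drains
// readyConnection can no longer deadlock against a blocked send. Checking
// pool.ctx.Err() also replaces the separate pool.closed flag.
func (pool *Pool) newConnectionProducer() {
	defer pool.wg.Done()
	for pool.ctx.Err() == nil {
		connection := Connection{} // stand-in for dialing a new connection
		select {
		case pool.readyConnection <- connection:
			// Handed off to a waiting GetConn call.
		case <-pool.ctx.Done():
			// Pool is closing; stop instead of blocking on the send.
			return
		}
	}
}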
Replacing pool.closed with ctx.Err is a good idea, I will update it accordingly. Thanks.
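Continuing the illustrative Pool fields from the sketch above, Close could then be shaped roughly like this; the drain loop is an assumption about cleanup, not the PR's exact code.

// Close cancels the pool's context so newConnectionProducer (and any other
// goroutine tracked by pool.wg) exits, waits for them, then drains any
// connections that were produced but never handed out. Closing the channel
// is safe here because the producer has already stopped sending.
func (pool *Pool) Close() {
	pool.cancel()
	pool.wg.Wait()

	close(pool.readyConnection)
	for range pool.readyConnection {
		// A real implementation would close each drained connection here.
	}
}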
rest lgtm
I'll merge it after you reply to the comment
client/pool.go
Outdated
@@ -298,6 +312,9 @@ func (pool *Pool) closeOldIdleConnections() {
	ticker := time.NewTicker(5 * time.Second)

	for range ticker.C {
		if pool.ctx.Err() != nil {
Not a big problem. If you change it to

select {
case <-pool.ctx.Done(): ...
case <-ticker.C: ...
}

it will not need up to 5 seconds to exit in the worst case. And if this goroutine is also tracked by pool.wg, we won't have a temporary goroutine leak after the pool is closed. Some unit test tools may complain about it, like https://github.com/uber-go/goleak.
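Built on the same illustrative fields (plus a time import), the idle-connection closer could look roughly like the sketch below, which combines both points: the select exits promptly on cancellation, and pool.wg tracking avoids the transient goroutine leak.

// closeOldIdleConnections is assumed to be started with pool.wg.Add(1).
// The select lets it return as soon as Close cancels the context rather
// than waiting out the current 5-second tick, and because pool.wg tracks
// it, tools like goleak see no lingering goroutine after Close returns.
func (pool *Pool) closeOldIdleConnections() {
	defer pool.wg.Done()

	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()

	for {
		select {
		case <-pool.ctx.Done():
			return
		case <-ticker.C:
			// Close idle connections that have been unused for too long.
		}
	}
}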
@lance6716, PTAL, it's updated according to your suggestion.
PR for this issue #794