
Connection pool throws error instead of blocking until connection is available: redis.exceptions.ConnectionError: Too many connections #2517


Closed
Fogapod opened this issue Dec 16, 2022 · 3 comments


Fogapod commented Dec 16, 2022

Version: 4.4.0
Platform: Linux
Description: The connection pool works in an unexpected way: when the connection limit is reached it throws an error instead of waiting for a connection to become available.

Code to reproduce:

import asyncio

from redis import asyncio as redis


async def main() -> None:
    async with redis.Redis(max_connections=2) as rd:
        await asyncio.gather(*[rd.set(str(i), i) for i in range(100)])


asyncio.run(main())

By default redis sets the connection limit to a ridiculously high number: #2220
This hides the issue with the pool. When the limit is set to something realistic like 50 or 100, the client starts throwing redis.exceptions.ConnectionError: Too many connections under load.

This bug was introduced in aioredis 2.0; here is the relevant issue in the aioredis repo: aio-libs-abandoned/aioredis-py#1173
This comment in particular explains the problem: aio-libs-abandoned/aioredis-py#1173 (comment)

aredis has the same design for some reason: NoneGG/aredis#167
Other than that, I am not aware of any async connection pool implementation that throws errors instead of waiting, so I think this is a bug.

I think the sync pool works in the same way. This is probably not the desired behaviour for sync apps either.
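For reference, a minimal sketch of the same pattern with the sync client (assumptions: untested here, and the thread count is arbitrary). If the sync pool really behaves the same way, many threads sharing a 2-connection pool should hit the same error:

from concurrent.futures import ThreadPoolExecutor

import redis


def main() -> None:
    rd = redis.Redis(max_connections=2)
    with ThreadPoolExecutor(max_workers=20) as executor:
        # every SET checks out a connection; with only 2 allowed, the sync
        # pool is expected to raise ConnectionError instead of waiting
        list(executor.map(lambda i: rd.set(str(i), i), range(100)))


main()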


Fogapod commented Dec 17, 2022

Looking at the source I found the BlockingConnectionPool class, which works as expected. For some reason it isn't mentioned in the documentation, and honestly it should be the default pool.

BlockingConnectionPool can be used this way:

import asyncio

from redis import asyncio as redis

async def main() -> None:
    pool = redis.BlockingConnectionPool(max_connections=2)
    async with redis.Redis(connection_pool=pool) as rd:
        await asyncio.gather(*[rd.set(str(i), i) for i in range(100)])


asyncio.run(main())

It solves both problems: it has a sane number of connections by default and it blocks instead of raising an exception, so all operations complete successfully.
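If I read the source correctly, BlockingConnectionPool also accepts a timeout argument: how many seconds to wait for a free connection before giving up. A sketch with illustrative values (the 50/10 numbers are not recommendations):

import asyncio

from redis import asyncio as redis


async def main() -> None:
    # wait up to 10 seconds for a free connection before raising an error
    pool = redis.BlockingConnectionPool(max_connections=50, timeout=10)
    async with redis.Redis(connection_pool=pool) as rd:
        await asyncio.gather(*[rd.set(str(i), i) for i in range(100)])


asyncio.run(main())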

Another note: when the redis client is given a pool object, it ignores quite a lot of the configuration present in Redis.__init__; all of those parameters are silently dropped and you have to replicate the proper pool setup yourself. A more Pythonic way would be to accept a pool_cls and pass pool-specific **kwargs to the implementation.
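A rough sketch of what that suggestion could look like (pool_cls is hypothetical and does not exist in redis-py; this only illustrates the proposed API):

import asyncio

from redis import asyncio as redis


async def main() -> None:
    rd = redis.Redis(
        host="localhost",
        decode_responses=True,
        pool_cls=redis.BlockingConnectionPool,  # hypothetical argument
        max_connections=50,  # would be forwarded to the pool
    )
    await asyncio.gather(*[rd.set(str(i), i) for i in range(100)])
    await rd.close()


asyncio.run(main())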

@cancan101

I created this PR: #2618 as a way to pass in the ConnectionPool's class. Otherwise, I think all of this logic and these defaults have to be copied and pasted wherever the ConnectionPool is created externally:

encoding: str = "utf-8",
encoding_errors: str = "strict",
decode_responses: bool = False,
retry_on_timeout: bool = False,
retry_on_error: Optional[list] = None,
ssl: bool = False,
ssl_keyfile: Optional[str] = None,
ssl_certfile: Optional[str] = None,
ssl_cert_reqs: str = "required",
ssl_ca_certs: Optional[str] = None,
ssl_ca_data: Optional[str] = None,
ssl_check_hostname: bool = False,

# based on input, setup appropriate connection args
if unix_socket_path is not None:
    kwargs.update(
        {
            "path": unix_socket_path,
            "connection_class": UnixDomainSocketConnection,
        }
    )
else:
    # TCP specific options
    kwargs.update(
        {
            "host": host,
            "port": port,
            "socket_connect_timeout": socket_connect_timeout,
            "socket_keepalive": socket_keepalive,
            "socket_keepalive_options": socket_keepalive_options,
        }
    )
    if ssl:
        kwargs.update(
            {
                "connection_class": SSLConnection,
                "ssl_keyfile": ssl_keyfile,
                "ssl_certfile": ssl_certfile,
                "ssl_cert_reqs": ssl_cert_reqs,
                "ssl_ca_certs": ssl_ca_certs,
                "ssl_ca_data": ssl_ca_data,
                "ssl_check_hostname": ssl_check_hostname,
            }
        )
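In other words, without that PR the caller has to rebuild this selection by hand when creating the pool. A hedged sketch of what that looks like today for a TLS setup (host, port, and the other values are illustrative):

from redis import asyncio as redis
from redis.asyncio.connection import SSLConnection

# replicate the connection-class selection manually for an SSL connection
pool = redis.BlockingConnectionPool(
    host="localhost",
    port=6379,
    max_connections=50,
    connection_class=SSLConnection,
    ssl_cert_reqs="required",
)
client = redis.Redis(connection_pool=pool)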


This issue is marked stale. It will be closed in 30 days if it is not updated.
