Add doc note on memory usage of read_sql with chunksize #10693
It should give you a memory improvement if the DB API/engine is hooked up correctly, right? Using it with sqlalchemy+psycopg2, you need to make sure that a server-side cursor is used. However, sqlalchemy's default is to fetch the whole result set client-side unless `stream_results` is enabled.
Maybe pandas could try and set that flag when `chunksize` is specified.
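A sketch of the pattern being discussed (the PostgreSQL DSN and table name are hypothetical). `chunksize` only batches rows after the driver has fetched them; with psycopg2 a server-side cursor is needed for rows to actually stream from the server, which SQLAlchemy exposes as the `stream_results` execution option:

```python
# Server-side-cursor pattern for PostgreSQL (DSN/table are hypothetical):
#
#   engine = create_engine("postgresql+psycopg2://user:pass@localhost/mydb")
#   with engine.connect().execution_options(stream_results=True) as conn:
#       for chunk in pd.read_sql("SELECT * FROM big_table", conn, chunksize=10_000):
#           ...  # process each chunk without the full result set in memory
#
# The same iteration works against any backend; an in-memory sqlite database
# stands in below so the snippet runs as-is (sqlite has no server-side
# cursors, so stream_results is omitted).
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("sqlite://")
pd.DataFrame({"x": range(25)}).to_sql("big_table", engine, index=False)

with engine.connect() as conn:
    sizes = [len(chunk)
             for chunk in pd.read_sql("SELECT * FROM big_table", conn, chunksize=10)]
print(sizes)  # [10, 10, 5]
```

Note that without `stream_results`, the loop above still works, but psycopg2 will have already pulled every row into client memory before the first chunk is yielded.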
Any update on this issue?
Does anyone know which parameters should be passed to iterate over a ClickHouse query as a stream?
As this typically does not give you much memory usage improvement (which is a bit unexpected given the keyword's description), it is worth a note in the docs.
From some discussion on gitter: https://gitter.im/pydata/pandas?at=55b61bf952d85d450f404be1 (with @litaotao) and https://gitter.im/pydata/pandas?at=554609295edd84254582fb39 (with @twiecki)
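For reference, a minimal, runnable illustration of what `chunksize` does buy you on the pandas side (sqlite in-memory; table and column names are made up): `read_sql` returns an iterator of DataFrames, so only one chunk's worth of rows needs to be held in pandas at a time, even if the driver has already buffered the full result set.

```python
# Chunkwise aggregation: combine per-chunk partial results so pandas never
# materializes more than one chunk at once.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("sqlite://")
pd.DataFrame({"grp": [i % 3 for i in range(90)],
              "val": range(90)}).to_sql("events", engine, index=False)

total = None
for chunk in pd.read_sql("SELECT grp, val FROM events", engine, chunksize=25):
    part = chunk.groupby("grp")["val"].sum()
    total = part if total is None else total.add(part, fill_value=0)

print(total.sort_index())
```

This is the kind of workflow the keyword is aimed at; the memory caveat discussed above is about what the underlying DB driver does before the chunks are yielded.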