Description
Code Sample, a copy-pastable example if possible
```python
import psycopg2
import pandas as pd

conn = psycopg2.connect(**{CREDENTIALS})  # raw DBAPI connection, not an engine
df = pd.read_sql('select foo from bar', conn)
# conn is still open here; the lock on `bar` persists until conn.close()
```
Problem description
`read_sql` accepts either an engine or a connection as `con`. When an engine is passed in, a connection is opened and closed when `read_sql` runs. However, when a connection is passed in, the query returns but that connection remains open. At least on Redshift (where I hit this issue), a query against a table leaves a lock on that table until the connection is closed. If any DDL jobs are running against that table, they can time out and fail.
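The open-connection behavior can be reproduced without Redshift. The sketch below uses an in-memory SQLite database as a stand-in for the real driver (an assumption; `psycopg2` against Redshift behaves the same way for the purposes of this issue): after `read_sql` returns, the connection still accepts statements and must be closed explicitly.

```python
import sqlite3
import pandas as pd

# In-memory SQLite stands in for Redshift/psycopg2 here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bar (foo INTEGER)")
conn.execute("INSERT INTO bar VALUES (1), (2)")
conn.commit()

df = pd.read_sql("SELECT foo FROM bar", conn)

# The connection is still open: further statements succeed,
# which means any lock held by the session is still in place.
conn.execute("SELECT 1")

conn.close()  # must be closed explicitly to release the session
```

With an engine instead of a connection, pandas opens and closes the connection itself, so this cleanup step is not needed.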
While users more familiar with databases might anticipate this behavior, it may not be apparent to those in Data Science or Analytics, for example, who might not have that familiarity.
I'd suggest emitting some type of warning when a connection is passed in to `read_sql`, to ensure users are aware that the connection is still open and a lock may still be held on the table they queried.
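Until such a warning exists, a workaround is to scope the connection with `contextlib.closing`, so it is closed as soon as the query finishes. A minimal sketch, again assuming an in-memory SQLite connection as a stand-in for the real DBAPI driver:

```python
import sqlite3
from contextlib import closing

import pandas as pd

# closing() guarantees conn.close() runs when the block exits,
# releasing any table locks held by the session.
with closing(sqlite3.connect(":memory:")) as conn:
    conn.execute("CREATE TABLE bar (foo INTEGER)")
    conn.execute("INSERT INTO bar VALUES (42)")
    df = pd.read_sql("SELECT foo FROM bar", conn)

# After the block, the connection refuses new work.
try:
    conn.execute("SELECT 1")
except sqlite3.ProgrammingError:
    print("connection closed")
```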
Expected Output

A warning alerting the user that the connection passed as `con` remains open after `read_sql` returns.
Output of pd.show_versions()
commit: None
python: 3.6.5.final.0
python-bits: 64
OS: Linux
OS-release: 4.14.146-93.123.amzn1.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.24.2
pytest: None
pip: 19.2.3
setuptools: 41.4.0
Cython: None
numpy: 1.16.2
scipy: 1.1.0
pyarrow: None
xarray: None
IPython: 7.8.0
sphinx: None
patsy: 0.5.1
dateutil: 2.8.0
pytz: 2019.3
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: 3.1.1
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml.etree: None
bs4: None
html5lib: None
sqlalchemy: 1.3.9
pymysql: None
psycopg2: 2.7.5 (dt dec pq3 ext lo64)
jinja2: None
s3fs: 0.3.4
fastparquet: None
pandas_gbq: None
pandas_datareader: None
gcsfs: None