Description
Pandas version checks
- I have checked that this issue has not already been reported.
- I have confirmed this bug exists on the latest version of pandas.
- I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
This example requires an Oracle 19c database.

import pandas as pd
import sqlalchemy
from sqlalchemy import text

engine = sqlalchemy.create_engine('oracle+oracledb://...', echo=True)
con = engine.connect()
con.execute(text('''
CREATE PRIVATE TEMPORARY TABLE ORA$PTT_TEMP (
    a INT
) ON COMMIT DROP DEFINITION
'''))
pd.DataFrame({'a': [1]}).to_sql('ORA$PTT_TEMP', engine)
2025-05-09 11:10:00,967 INFO sqlalchemy.engine.Engine SELECT tables_and_views.table_name
FROM (SELECT a_tables.table_name AS table_name, a_tables.owner AS owner
FROM all_tables a_tables UNION ALL SELECT a_views.view_name AS table_name, a_views.owner AS owner
FROM all_views a_views) tables_and_views
WHERE tables_and_views.table_name = :table_name AND tables_and_views.owner = :owner
2025-05-09 11:10:00,967 INFO sqlalchemy.engine.Engine [cached since 533.2s ago] {'table_name': 'ORA$PTT_TEMP', 'owner': '...'}
2025-05-09 11:10:00,993 INFO sqlalchemy.engine.Engine
CREATE TABLE ORA$PTT_TEMP (
a INT
)
DatabaseError: (oracledb.exceptions.DatabaseError) ORA-32463: cannot create an object with a name matching private temporary table prefix
Issue Description
Hello Pandas!
I am trying to use DataFrame.to_sql with Oracle "PRIVATE TEMPORARY" tables.
The catch is that these tables, for whatever reason, cannot be detected with the inspector.has_table() method, so pandas tries to create the table and then fails.
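For reference, the failed detection looks roughly like this (a minimal sketch, run on the same connection that created the private temporary table in the example above):

from sqlalchemy import inspect

insp = inspect(con)
print(insp.has_table('ORA$PTT_TEMP'))  # False, even though the PTT is usable in this session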
The issue is quite annoying, because the error is raised in the pandas.SQLDatabase.prep_table() method, which is called unconditionally in pandas.SQLDatabase.to_sql(), and there is no way to override it with a custom method: callable parameter to pandas.DataFrame.to_sql().
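To illustrate why the method parameter cannot help here, below is a minimal sketch of a custom insert callable (assuming pandas' SQLTable wrapper exposes the underlying sqlalchemy.Table as .table, which is how I read the documented insertion-method interface). prep_table() runs before the callable is ever invoked, so the failing CREATE TABLE still happens:

def insert_only(table, conn, keys, data_iter):
    # Only controls how rows are inserted; table creation has already happened
    # (or failed) inside SQLDatabase.prep_table() by the time this runs.
    rows = [dict(zip(keys, row)) for row in data_iter]
    conn.execute(table.table.insert(), rows)

pd.DataFrame({'a': [1]}).to_sql('ORA$PTT_TEMP', con, method=insert_only)
# still raises ORA-32463 in prep_table() before insert_only is ever called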
One could argue that this is a bug in the SQLAlchemy Oracle dialect rather than in pandas, but IMHO it should be possible to skip the table check and creation altogether in the pandas.DataFrame.to_sql() call.
It looks like it would be easy to add a skip_table_creation: bool = False argument to the to_sql() method that would simply skip the prep_table() call in SQLDatabase.to_sql(). The downside would be that pandas would not have the reflected information about the target database types, but this could potentially be solved by passing a custom sqlalchemy.Table object?
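To make that concrete, a hypothetical call could look like the sketch below (skip_table_creation does not exist today; the name and semantics are only a suggestion):

pd.DataFrame({'a': [1]}).to_sql(
    'ORA$PTT_TEMP',
    con,
    if_exists='append',
    index=False,
    skip_table_creation=True,  # proposed: bypass SQLDatabase.prep_table() entirely
)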
What do you think about this? Is this a direction that pandas would like to go in, or do you think of the .to_sql() method more as a handy feature for ad-hoc operations that should not be used much in production? Do you think it is better to write my own insert methods and not rely on .to_sql() for production use?
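For completeness, the workaround I see today is to bypass .to_sql() entirely and insert the rows with plain SQLAlchemy on the same session that created the private temporary table (a minimal sketch based on the one-column example above):

from sqlalchemy import text

df = pd.DataFrame({'a': [1]})
# executemany-style insert; the PTT only exists in this session/connection
con.execute(
    text('INSERT INTO ORA$PTT_TEMP (a) VALUES (:a)'),
    df.to_dict(orient='records'),
)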
Expected Behavior
I expect that to_sql() will not try to create the table if it already exists, or that there will be an option to skip table creation when I know the table already exists (even if pandas cannot detect it).
Installed Versions
INSTALLED VERSIONS
commit : 0691c5c
python : 3.11.3
python-bits : 64
OS : Darwin
OS-release : 24.4.0
Version : Darwin Kernel Version 24.4.0: Fri Apr 11 18:33:47 PDT 2025; root:xnu-11417.101.15~117/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.UTF-8
pandas : 2.2.3
numpy : 1.26.4
pytz : 2024.2
dateutil : 2.8.2
pip : 24.0
Cython : None
sphinx : None
IPython : 8.21.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.3.1
html5lib : 1.1
hypothesis : None
gcsfs : None
jinja2 : 3.1.3
lxml.etree : 5.1.0
matplotlib : 3.10.1
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.2
pandas_gbq : None
psycopg2 : 2.9.9
pymysql : 1.4.6
pyarrow : 15.0.0
pyreadstat : None
pytest : 8.3.3
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : 2.0.40
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.1
qtpy : None
pyqt5 : None