Description
- I have checked that this issue has not already been reported.
- I have confirmed this bug exists on the latest version of pandas.
- (optional) I have confirmed this bug exists on the master branch of pandas.
Code Sample
import datetime
import numpy as np
import pandas as pd
# This works fine:
pd.DataFrame([datetime.datetime(3015, 1, 1, 0, 0, 0)])
# returned dtype is object, all good.
# But the same timestamp as a numpy object fails:
pd.DataFrame(np.array(['3015-01-01T00:00:00.000'], dtype='datetime64[ms]'))
# OutOfBoundsDatetime: Out of bounds nanosecond timestamp: 3015-01-01 00:00:00
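For reference, the conversion that fails targets datetime64[ns], whose documented limits span roughly the years 1677 to 2262, so the year 3015 is well outside them:
# The nanosecond-resolution Timestamp bounds enforced by the conversion:
print(pd.Timestamp.min)  # 1677-09-21 00:12:43.145224193
print(pd.Timestamp.max)  # 2262-04-11 23:47:16.854775807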
Problem description
pd.DataFrame tries to infer dtypes, and this works for datetime.datetime objects: values within the datetime64[ns] limits are converted, while values outside them are kept as dtype object. For numpy datetime64 input the same flexibility is not in place, and pd.DataFrame() instead raises OutOfBoundsDatetime during the automatic conversion.
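A possible workaround, sketched here on the assumption that routing through the working datetime.datetime path above is acceptable, is to turn the numpy values into Python datetime.datetime objects before constructing the frame:
# A sketch of a workaround, not a fix: tolist() on a datetime64[ms] array
# yields datetime.datetime instances, which pd.DataFrame keeps as dtype
# object when they fall outside the datetime64[ns] range (see above).
arr = np.array(['3015-01-01T00:00:00.000'], dtype='datetime64[ms]')
pd.DataFrame(arr.tolist())  # returned dtype is object, no exception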
Expected Output
dtype object (matching the datetime.datetime behavior) when pd.DataFrame is called on the numpy timestamps above.
Output of pd.show_versions()
INSTALLED VERSIONS
commit : 7d32926
python : 3.8.5.final.0
python-bits : 64
OS : Linux
OS-release : 5.4.0-62-generic
Version : #70-Ubuntu SMP Tue Jan 12 12:45:47 UTC 2021
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.2.2
numpy : 1.19.5
pytz : 2019.3
dateutil : 2.8.1
pip : 21.0.1
setuptools : 44.0.0
Cython : None
pytest : 6.2.2
hypothesis : None
sphinx : 3.4.3
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.6.2
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.11.2
IPython : 7.19.0
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : 3.3.4
numexpr : None
odfpy : None
openpyxl : 3.0.6
pandas_gbq : None
pyarrow : None
pyxlsb : None
s3fs : None
scipy : 1.6.0
sqlalchemy : 1.3.22
tables : None
tabulate : None
xarray : 0.16.2
xlrd : 1.2.0
xlwt : None
numba : None