BUG: Memory leak on v1.5.0 when using pd.to_datetime() #49164

Closed
@morteymike

Description

Pandas version checks

  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of pandas.

  • I have confirmed this bug exists on the main branch of pandas.

Reproducible Example

import pandas as pd
import numpy as np
import time

big_num = 30_000_000
num_loops = 5_000_000

for i in range(num_loops):
    time.sleep(0.1)
    d = {
        'date_col': np.random.choice(pd.date_range('1970-10-01', '2021-10-31'), big_num),
        'another_col': range(big_num)
    }

    # Null out every 20th date so the column contains missing values.
    for y in range(0, big_num, 20):
        d['date_col'][y] = None

    df = pd.DataFrame(d)

    # Convert NaT to None, then parse back to datetime; the
    # pd.to_datetime() call is where the memory growth appears.
    df['date_col'] = df['date_col'].astype(object).where(df['date_col'].notnull(), None)
    df['date_col'] = pd.to_datetime(df['date_col'], infer_datetime_format=True)

    # Drop the frame so memory can be reclaimed between iterations.
    df = None

Issue Description

There appears to be a memory leak in the 1.5.0 release when calling pd.to_datetime() on a column. I traced my production code to the pd.to_datetime() line and can confirm it is the cause. My code calls a function similar to the one above several hundred times, each with a different df, so I've written the example above to replicate my situation.

Running the code above on my personal machine, I see a significant difference in memory consumption between 1.4.0 and 1.5.0 (I haven't tested any versions in between). My machine does, however, appear to run the garbage collector and reclaim memory periodically, which the production Docker container does under 1.4.0 but not under 1.5.0.

The example above is a stripped-down version of my production application. Under 1.5.0, the process quickly consumes all available memory in the production Docker container and is killed by the host machine. After reverting to 1.4.0, with everything else identical, the application shows no memory leak.
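
To separate the leak from garbage-collector timing, the loop can force a collection and print the process's resident set size after each iteration. A minimal sketch of that instrumentation, assuming the third-party psutil package is available (the frame size and iteration count here are illustrative, not part of the report):

import gc
import os

import pandas as pd
import psutil

proc = psutil.Process(os.getpid())

for i in range(100):
    # A smaller frame than the full repro, just enough to watch the trend.
    s = pd.Series(pd.date_range('1970-10-01', '2021-10-31', freq='H')).astype(object)
    s[::20] = None
    pd.to_datetime(s, infer_datetime_format=True)

    gc.collect()  # rule out differences in collector timing
    # If the leak is present, RSS grows every iteration instead of staying flat.
    print(f"iter {i}: rss = {proc.memory_info().rss / 1e6:.1f} MB")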

Expected Behavior

Memory consumption should be more or less identical between 1.4.0 and 1.5.0 when using pd.to_datetime(); there should not be a memory leak in 1.5.0.

Installed Versions

INSTALLED VERSIONS

commit : 87cfe4e
python : 3.10.6.final.0
python-bits : 64
OS : Darwin
OS-release : 21.6.0
Version : Darwin Kernel Version 21.6.0: Wed Aug 10 14:28:23 PDT 2022; root:xnu-8020.141.5~2/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8

pandas : 1.5.0
numpy : 1.23.4
pytz : 2022.4
dateutil : 2.8.2
setuptools : 63.4.3
pip : 22.2.2
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
brotli : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : None
snappy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
zstandard : None
tzdata : None

Metadata

Labels

Timestamp (pd.Timestamp and associated methods)
