
C extension bug / Uncaught in Python causing kernel to die #556

Open
@FnayouSeif

Description


Describe the bug

I accidentally shared a connection object between threads in a Python 3 ThreadPoolExecutor. My script did not surface any error because the exception is not caught at the Python level; it looks like the crash happens in the C extension. Sharing a connection object works with psycopg2 or sqlite3; this only happens with mysqlclient.
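For context, a DBAPI connection can usually only be shared across threads if the caller serializes every use of it. A minimal sketch of that pattern, using the stdlib sqlite3 driver as a stand-in (the `check_same_thread=False` flag is sqlite3-specific; with mysqlclient you would share the `MySQLdb` connection directly):

```python
import sqlite3
import threading
from concurrent.futures import ThreadPoolExecutor

# One shared connection, guarded by a lock so only one thread
# touches it at a time.
conn = sqlite3.connect(":memory:", check_same_thread=False)
lock = threading.Lock()
conn.execute("CREATE TABLE test (id INT, bar INT)")

def insert(record):
    with lock:  # serialize all access to the shared connection
        conn.execute("INSERT INTO test VALUES (?, ?)", record)
        conn.commit()

with ThreadPoolExecutor(max_workers=4) as tpe:
    futures = [tpe.submit(insert, (i, i * 2)) for i in range(1, 30)]
for f in futures:
    f.result()  # surface any worker exception

count = conn.execute("SELECT COUNT(*) FROM test").fetchone()[0]  # 29 rows
```

Without the lock (as in the repro below), two threads can drive the same connection's C-level state concurrently, which is consistent with the double-free reported here.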

To Reproduce

Schema

The schema is generated by the script below; it is not particularly relevant to the bug.

Code

docker-compose -f docker-compose.yaml up --build

docker-compose.yaml

services:
  mysql:
    image: mysql:8
    container_name: 'sqlalchemy'
    ports:
      - 3311:3306
    environment:
      - MYSQL_ROOT_PASSWORD=python
      - MYSQL_PASSWORD=python
      - MYSQL_USER=python
      - MYSQL_DATABASE=python

python3 execute.py --share=1

execute.py

# execute.py
import sqlalchemy as sa
from concurrent.futures import ThreadPoolExecutor
import os
import logging 
import traceback
import argparse 

logger = logging.getLogger("Tester")

def create_table(engine:sa.engine.Engine):
    sql = "DROP TABLE IF EXISTS test"
    engine.connect().execute(sql)
    sql = "CREATE TABLE test (id int, bar INT);"
    engine.connect().execute(sql)
    return 


def insert(table_obj: sa.Table, record: tuple, conn: sa.engine.Connection, engine: sa.engine.Engine):
    # generates and executes the insert query
    query = table_obj.insert().values(id=record[0], bar=record[1])
    if conn is None:
        conn = engine.connect()
    conn.execute(query)
    logger.info("Inserted Record...")
    return 

def run(share_connection=True):
    # tries to insert some records into the test table using threads
    engine = sa.create_engine("mysql://python:python@host.docker.internal:3311/python")
    create_table(engine=engine)
    metadata = sa.MetaData(bind=engine)

    table_obj = sa.Table('test', metadata, autoload=True)

    if share_connection:
        engine_param = None 
        conn = engine.connect()
    else:
        conn = None
        engine_param = engine
    
    records = [(i, i * 2) for i in range(1, 30)]
    with ThreadPoolExecutor(max_workers=os.cpu_count() - 1) as tpe:
        threads = [tpe.submit(insert, table_obj, record, conn, engine_param) for record in records]


    if conn is not None: conn.close()

def main():
    # wrapper for run
    try:
        parser = argparse.ArgumentParser()
        parser.add_argument('--share', default=False, type=int)
        share_connection = parser.parse_args().share == 1
        run(share_connection=share_connection)
        logger.info("success")
    except:
        logger.error(f"This now: {traceback.format_exc()}")

if __name__=='__main__':
    main()
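For comparison, the crash does not occur when each worker thread uses its own connection (the `--share=0` path above). A minimal sketch of the per-thread-connection pattern using `threading.local`, with the stdlib sqlite3 driver as a stand-in (with mysqlclient you would call `MySQLdb.connect(...)` instead; the connection parameters here are hypothetical):

```python
import sqlite3
import threading
from concurrent.futures import ThreadPoolExecutor

# One connection per thread: DBAPI connections (mysqlclient included)
# are generally not safe to share across threads without locking.
tls = threading.local()

def get_conn():
    # Lazily open a connection the first time each thread asks for one.
    if not hasattr(tls, "conn"):
        # Stand-in for MySQLdb.connect(host=..., user=..., passwd=...)
        tls.conn = sqlite3.connect(":memory:")
    return tls.conn

def insert(record):
    conn = get_conn()
    conn.execute("CREATE TABLE IF NOT EXISTS test (id INT, bar INT)")
    conn.execute("INSERT INTO test VALUES (?, ?)", record)
    conn.commit()
    return record[0]

with ThreadPoolExecutor(max_workers=4) as tpe:
    futures = [tpe.submit(insert, (i, i * 2)) for i in range(1, 30)]
results = [f.result() for f in futures]  # re-raises worker exceptions
```

Calling `f.result()` also matters in the original repro: exceptions raised inside a submitted callable are stored on the future and are otherwise silently dropped.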

requirements.txt

mysqlclient==2.1.1
SQLAlchemy==1.4.40
greenlet==1.1.3

To Reproduce


python3 -m venv .venv.test 
source .venv.test/bin/activate 
pip install -r requirements.txt 
python3 execute.py --share=1

Output

free(): double free detected in tcache 2
free(): double free detected in tcache 2
Segmentation fault

Also sometimes:

free(): double free detected in tcache 2
double free or corruption (!prev)
Aborted

Environment

  • OS: Ubuntu 20.04 LTS (on WSL2)
  • Python: 3.8.10
  • SQLAlchemy: 1.4.40
  • Database: MySQL 8
  • DBAPI (eg: psycopg, cx_oracle, mysqlclient): mysqlclient==2.1.1

Additional context

This uncaught error also kills the Jupyter notebook kernel without any traceback; that is how I discovered it.
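As a debugging aid for this kind of silent death, the stdlib `faulthandler` module can be enabled to dump the Python tracebacks of all threads to stderr when the process receives a fatal signal (SIGSEGV, SIGABRT, etc.), which at least shows where the interpreter was when the C extension crashed. A sketch (not a fix for the underlying bug):

```python
import faulthandler

# Dump Python tracebacks of all threads to stderr on a fatal signal.
faulthandler.enable()
```

The same effect is available without code changes via `python3 -X faulthandler execute.py --share=1` or the `PYTHONFAULTHANDLER=1` environment variable.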
