    Federico Caselli
    No problem!
    Will Shamblin
    Hi guys, I have a problem that I'm not sure how to approach.
    I have 3 classes that all share some attributes - (id, entry time, owner, category) and I would like to be able to query them all at once.
    So say I need to search for instances where entry_time < some datetime; I'd like to easily retrieve all matching instances across the classes. Not knowing much about SQLAlchemy, I figured the easiest solution would be an abstract class that would then be used for querying, but it seems that is not supported?
    Right now I'm stuck because the only querying that seems to be supported is one class at a time, session.query(Class).filter(), but I need to be able to query multiple classes by their shared attributes.
    Federico Caselli
    usually those classes are defined in a hierarchy of some kind, otherwise the ORM would not know what class to return for each row.
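    A minimal single-table inheritance sketch of that idea (all class and column names are illustrative, not from the question): the shared columns live on a mapped base, a discriminator column tells the ORM which class each row is, and one query against the base returns instances of every subclass.

    ```python
    from datetime import datetime
    from sqlalchemy import Column, DateTime, Integer, String, create_engine
    from sqlalchemy.orm import declarative_base, sessionmaker

    Base = declarative_base()

    class Entry(Base):
        # shared attributes live on the base; "type" discriminates subclasses
        __tablename__ = "entry"
        id = Column(Integer, primary_key=True)
        entry_time = Column(DateTime)
        owner = Column(String)
        category = Column(String)
        type = Column(String(50))
        __mapper_args__ = {"polymorphic_on": type, "polymorphic_identity": "entry"}

    class Article(Entry):
        __mapper_args__ = {"polymorphic_identity": "article"}

    class Comment(Entry):
        __mapper_args__ = {"polymorphic_identity": "comment"}

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()
    session.add_all([
        Article(entry_time=datetime(2020, 1, 1), owner="a", category="news"),
        Comment(entry_time=datetime(2022, 1, 1), owner="b", category="misc"),
    ])
    session.commit()

    # one query across all subclasses, filtered on a shared column
    old = session.query(Entry).filter(Entry.entry_time < datetime(2021, 1, 1)).all()
    print([type(e).__name__ for e in old])  # ['Article']
    ```

    Joined-table inheritance works the same way when the subclasses also have their own columns; only the table layout differs.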
    Pascal Brückner
    Hello! Using async SQLAlchemy, let's assume I have all my relationships defined with lazy="raise", which raises an exception each time I try to access the respective attribute. Is there a way to load such an "unloaded" relationship after I have already obtained the object from the ORM? According to the documentation, Session.refresh() doesn't automatically refresh unloaded relationships. How do I load the relationship, then? I'm aware of joinedload() & friends, but with a large object graph I would like to avoid eagerly loading the whole database with every single query.
    Federico Caselli
    there may be an API missing from refresh. In the meantime I believe you can use session.get with populate_existing=True, since that takes a list of options where you can specify a loader
    it's also in the "tip" box in the refresh documentation
    Pascal Brückner
    Thanks, I'll give that a try!
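    A synchronous sketch of that suggestion (model names are illustrative; the same call shape works on AsyncSession with await): re-fetching the same identity with populate_existing=True plus an explicit loader option populates the relationship that lazy="raise" would otherwise block.

    ```python
    from sqlalchemy import Column, ForeignKey, Integer, create_engine
    from sqlalchemy.orm import (
        declarative_base, relationship, selectinload, sessionmaker,
    )

    Base = declarative_base()

    class Parent(Base):
        __tablename__ = "parent"
        id = Column(Integer, primary_key=True)
        # lazy="raise" forbids implicit lazy loading of this relationship
        children = relationship("Child", lazy="raise")

    class Child(Base):
        __tablename__ = "child"
        id = Column(Integer, primary_key=True)
        parent_id = Column(Integer, ForeignKey("parent.id"))

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()
    session.add(Parent(id=1, children=[Child()]))
    session.commit()

    p = session.get(Parent, 1)
    # accessing p.children here would raise InvalidRequestError; instead,
    # re-fetch the same identity and name the relationship loader explicitly
    p = session.get(
        Parent, 1,
        populate_existing=True,
        options=[selectinload(Parent.children)],
    )
    print(len(p.children))  # 1
    ```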
    Good afternoon. Could those familiar with this situation share their experience: only the first line from the CSV file is added..
        async with aiofiles.open(
            directory, mode="r", encoding="utf-8", newline=""
        ) as afp:
            async for new in AsyncDictReader(afp):
                query = ScheduleService(
                    id=new["id"],
                    name=new["name"],
                    type=new["type"],
                    title=new["title"],
                    description=new["description"],
                    date=datetime.strptime(new["date"], settings.DATE),
                    there_is=datetime.strptime(
                        new["there_is"], settings.DATETIME_FORMAT
                    ),
                    created_at=datetime.now(),
                    sch_s_owner=new["sch_s_owner"],
                    sch_s_service_id=new["sch_s_service_id"],
                )
                # ..
                await session.commit()
    Hi all, I'm trying to follow the tutorial on caching (SQLAlchemy version 1.3): https://docs.sqlalchemy.org/en/13/_modules/examples/dogpile_caching/caching_query.html but whenever I execute the line self.merge_result(..., load=False) I get: AttributeError: '<MyModel>' object has no attribute '_sa_instance_state'. Does anyone know if this is me making a mistake/wrong assumption, or is the tutorial missing something?
    Jonathan Vanasco
    The example code changed heavily with the 1.4 release. I'm not sure if the 1.3 docs worked on the 1.3 version, or if they were from an earlier version. In any case, you should be using 1.4, as 1.3 is approaching end of life.
    Is there a way to perform the value transformations on my ORM object that happen when calling db.session.commit(), without needing an active DB connection?
    In my case, I have a class MyModel(sqlalchemy.Model) with type = db.Column(db.Text()) and I assign myModelObj.type = MyTypeEnum.MY_TYPE1. Before I do db.session.commit(), I see myModelObj.type == MyTypeEnum.MY_TYPE1 -- but after I do db.session.commit(), a transformation has happened and myModelObj.type == "MY_TYPE1"
    I believe what's happening is that SQLAlchemy triggers a read from the DB, by deleting the underlying state from my ORM object, which triggers a reload on the next state access, which seems to hit the DB
    Federico Caselli
    It seems to work as expected. If you want the enum back when reading the attribute, you should use the Enum column type
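    A sketch of that suggestion (class and member names are illustrative): with sqlalchemy.Enum instead of Text, the attribute stays an enum member even after the commit-time expiration and reload, because the column type converts the stored string back on the way in.

    ```python
    import enum
    from sqlalchemy import Column, Enum, Integer, create_engine
    from sqlalchemy.orm import declarative_base, sessionmaker

    Base = declarative_base()

    class MyTypeEnum(enum.Enum):
        MY_TYPE1 = "MY_TYPE1"

    class MyModel(Base):
        __tablename__ = "my_model"
        id = Column(Integer, primary_key=True)
        # Enum(...) stores the member name but round-trips the enum member
        type = Column(Enum(MyTypeEnum))

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()
    obj = MyModel(type=MyTypeEnum.MY_TYPE1)
    session.add(obj)
    session.commit()

    # the commit expired the object; this access reloads it from the DB,
    # and the Enum type hands back the enum member, not the raw string
    print(obj.type)  # MyTypeEnum.MY_TYPE1
    ```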

    Hello SQLAlchemy community, I am hoping to get some guidance on the issue below:

    The metadata catalog tool OpenMetadata https://docs.open-metadata.org/openmetadata/connectors/database/db2 is using SQLAlchemy to connect to a Db2 LUW database. The following packages are installed:
    apt-get install libxml2
    pip install "ibm-db==3.1.3"
    pip install "ibm-db-sa==0.3.8"
    But when it tries to connect, it leads to error below:
    Failed to test connection due to {"error": "Connection error from Engine(db2+ibm_db://dt224374:@db2databaseurl:50000/TPJ) - Unknown error connecting with Engine(db2+ibm_db://dt224374:*@db2databaseurl:50000/TPJ) - (ibm_db_dbi.ProgrammingError) ibm_db_dbi::ProgrammingError: SQLNumResultCols failed: [IBM][CLI Driver][DB2/AIX64] SQL0104N An unexpected token "END-OF-STATEMENT" was found following "1.3.1"} / SELECT 42". Expected tokens may include: "<table_expr>". SQLSTATE=42601 SQLCODE=-104\n[SQL: / {"app": "OpenMetadata", "version": ""} /\nSELECT 42]\n(Background on this error at: https://sqlalche.me/e/14/f405)."}
    It is pointing to the SQLAlchemy page https://sqlalche.me/e/14/f405 but this does not help in identifying what the issue could be.

    If we use these packages ibm-db, ibm-db-sa directly in a Python script to connect to this DB2, we get the same error. It seems as though the generated query 'SELECT 42' here is wrong. DB2 expects that query to be 'SELECT 42 FROM SYSIBM.SYSDUMMY1;'

    Federico Caselli
    SQLAlchemy does not support DB2 directly, so you should ask in the driver's repository
    Tea Lin

    Hello all.

    I hit a runtime error with the message "deque mutated during iteration", raised from sqlalchemy/event/attr.py line 255 (SQLAlchemy 1.4.39).

    The error appeared after I added an event listener on the "before_commit" event, as below:
    event.listen(my_scoped_session, "before_commit", my_func, propagate=True)

    But I have no idea how to solve this.

    Does anyone know the root cause?
    This is my first time leaving a message here; it would be appreciated if anyone knows the root cause.

    Spruha Pattnaik

    Hi folks, I'm facing very high latency (200s/call) while doing INSERTs into a specific table in my DB. I did not face this issue while using the Django ORM, but when I switched to the SQLAlchemy ORM, it cropped up.

    Here's the code doing the inserts:

    class MyRepository:
        def __init__(self):
            _engine = create_engine(
                dburl,
                executemany_mode='batch',
                executemany_batch_page_size=10000,
                pool_size=20,
                max_overflow=10,
            )
            _session_maker = sessionmaker(bind=_engine)
            self.session = _session_maker()

        def bulk_create_records(self, objects):  # max 50 objects passed
            with self.session.begin():
                self.session.bulk_save_objects(objects, return_defaults=True)
            return objects

    Invocation: records = MyRepository().bulk_create_records(record_objects)

    I'm connecting to a PostgreSQL RDS instance & the highest wait events with these inserts ("INSERT into mytable (cola, colb, ...) VALUES (?, ?, ...) RETURNING mytable.id;") are caused by SubtransControlLock and subtrans events.

    What subtransactions are getting triggered here & how can I eliminate this issue?

    Federico Caselli
    @bestteadahan If you are adding an event inside another event, that's not supported, add the listener outside the callback. If that's not the case then it may be a bug of some kind. Please post it on GitHub with a reproduction case if you have one
    Tea Lin
    @CaselIT Thanks for your advice. I ended up using the decorator @event.listens_for() instead of event.listen(), and the error is solved.

    Hi, I'm new to this SQL package. I'm trying to connect to SQL Server with the following

    connection_url = URL.create("mssql+pymssql://Username:Password@xx.103.xxx.xx/dbname")
    engine = db.create_engine(connection_url)
    connection = engine.connect()

    and get this error
    dialect, driver = name.split(".")
    ValueError: too many values to unpack (expected 2)

    Does anyone know what I did wrong?
    Alex Grönholm
    @nth2tell why are you calling URL.create()?
    Federico Caselli
    The problem is that you are using the create method incorrectly. I suggest you refer to the documentation https://docs.sqlalchemy.org/en/14/core/engines.html?highlight=url+create#sqlalchemy.engine.URL.create
    To create a URL from a string you should use the make_url function instead
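    To illustrate the difference (the host name below is a placeholder, not the one from the question): make_url parses a complete DSN string, while URL.create builds a URL from individual components, which is why passing a whole DSN to URL.create fails.

    ```python
    from sqlalchemy.engine import URL, make_url

    # parse a complete DSN string
    url = make_url("mssql+pymssql://Username:Password@host.example.com/dbname")
    print(url.drivername)  # mssql+pymssql
    print(url.host)        # host.example.com

    # or build an equivalent URL from individual components
    url2 = URL.create(
        "mssql+pymssql",
        username="Username",
        password="Password",
        host="host.example.com",
        database="dbname",
    )
    print(url2.database)   # dbname
    ```

    Either resulting URL object can be passed straight to create_engine().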
    Spruha Pattnaik
    Hi, can someone please suggest the most efficient (in terms of DB latency) way to bulk insert objects & fetch the corresponding auto-generated primary keys?
    session.bulk_save_objects(objects, return_defaults=True) causes very high DB latency for the INSERT operation ....
    I also tried session.bulk_save_objects(objects) and then selecting the objects from the DB using a hash key (which is indexed), but the SELECTs are also long-running....
    Please help! It's a production issue ....
    Alex Grönholm
    @_Spruha_twitter I think your latency problems are caused by something else
    it's not sqlalchemy's fault
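    For reference, the SQLAlchemy docs note that bulk_save_objects(..., return_defaults=True) falls back to emitting one INSERT per row so it can collect each generated primary key, which defeats batching. On PostgreSQL, one alternative is a single multi-row Core INSERT ... RETURNING. A hedged sketch with an illustrative table (only the compiled SQL is shown; no connection needed):

    ```python
    from sqlalchemy import column, insert, table
    from sqlalchemy.dialects import postgresql

    mytable = table("mytable", column("id"), column("cola"))

    # one statement inserts every row and returns all generated ids
    stmt = (
        insert(mytable)
        .values([{"cola": 1}, {"cola": 2}, {"cola": 3}])
        .returning(mytable.c.id)
    )
    print(stmt.compile(dialect=postgresql.dialect()))
    ```

    Executing the compiled statement returns one id per inserted row in a single round trip.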
    Hello, I'm trying to use json_serializer and json_deserializer with the create_async_engine function, but the workflow never goes into those functions when I try to insert or query a JSON column of a model. Is it only for psycopg2?
    Federico Caselli
    it should work also for async engines
    Hmm.. I wanted to try that because when I try to insert a dict into a JSON Postgres column, I get the error "unhashable type: dict"; however, if I run json.dumps(my_dict) first, it works without any issue.
    If I understand the docs correctly, by default engines use json.dumps and json.loads for serialization/deserialization.
    Federico Caselli
    yes, the issue here seems different though. without a stack trace it's hard to suggest anything
      File "/Users/thibaultvallois/etl-automation/service/etl_automation/persistors/base.py", line 32, in create_all
        async with self._get_session() as session:
      File "/Users/thibaultvallois/.asdf/installs/python/3.10.4/lib/python3.10/contextlib.py", line 206, in __aexit__
        await anext(self.gen)
      File "/Users/thibaultvallois/etl-automation/service/etl_automation/persistors/base.py", line 29, in _get_session
        raise err
      File "/Users/thibaultvallois/etl-automation/service/etl_automation/persistors/base.py", line 26, in _get_session
        await session.commit()
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/ext/asyncio/session.py", line 580, in commit
        return await greenlet_spawn(self.sync_session.commit)
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 128, in greenlet_spawn
        result = context.switch(value)
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1451, in commit
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 829, in commit
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 808, in _prepare_impl
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3383, in flush
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3522, in _flush
        with util.safe_reraise():
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 208, in raise_
        raise exception
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3483, in _flush
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 630, in execute
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py", line 212, in save_obj
        for (
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py", line 398, in _organize_states_for_save
        and instance_key in uowtransaction.session.identity_map
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/identity.py", line 100, in __contains__
        if key in self._dict:
    TypeError: unhashable type: 'dict'

    class Postings(Base):
        __tablename__ = "postings"
        stamp = Column(Date, primary_key=True)
        dat = Column(JSON, primary_key=True)

    postings = Postings(stamp=my_date, dat={"a": "b"})
    async_session.add(postings)
    await async_session.commit()

    Federico Caselli
    are you trying to use a json column as primary key?
    that's not supported
    ty @CaselIT, it's obvious now
    Federico Caselli
    in the ORM; with Core you can do anything of course :)
    but the ORM uses a dict to associate key -> ORM instance, so the key needs to be hashable
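    The constraint can be seen with plain Python (the tuple below only illustrates the general shape of an identity-map key, class name plus primary key values): a dict anywhere inside a dictionary key raises exactly this error.

    ```python
    # illustrative shape of an identity-map key: (class name, primary key values)
    key = ("Postings", ("2022-01-01", {"a": "b"}))  # a dict inside the key

    identity_map = {}
    try:
        identity_map[key] = "some instance"  # dict keys must be hashable
    except TypeError as err:
        print(err)  # unhashable type: 'dict'
    ```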
    Jonathan Vanasco
    I am writing missing tests for an old SQLAlchemy app, and ran into this issue: the app was built against an existing database and doesn't have any of the indexes on it. The basic unique indexes are straightforward. How should I implement "CREATE UNIQUE INDEX uidx_foo ON foo(lower(name))" with __table_args__?
    Is it just DDL(raw_sql), or is there a way to leverage UniqueIndex, as I likely have some fkeys pointing to these?
    I meant UniqueConstraint
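    A sketch of the functional index approach (the table and index names are taken from the question; the model is otherwise illustrative): Index accepts SQL expressions such as func.lower(...) along with unique=True, and defining it after the class, against the bound ORM attribute, is one reliable way to attach it, no raw DDL needed.

    ```python
    from sqlalchemy import Column, Index, Integer, String, create_engine, func
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class Foo(Base):
        __tablename__ = "foo"
        id = Column(Integer, primary_key=True)
        name = Column(String)

    # functional unique index on the bound column; emits
    # CREATE UNIQUE INDEX uidx_foo ON foo (lower(name))
    Index("uidx_foo", func.lower(Foo.name), unique=True)

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)  # index is created together with the table
    print([ix.name for ix in Foo.__table__.indexes])  # ['uidx_foo']
    ```

    UniqueConstraint, by contrast, only takes plain columns, so an expression like lower(name) needs Index with unique=True rather than a constraint.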