    Jonathan Vanasco
    The example code changed heavily with the 1.4 release. I'm not sure whether the 1.3 docs worked against 1.3, or whether they were from an earlier version. In any case, you should be using 1.4, as 1.3 is approaching end of life
    Is there a way to perform the value transformations on my ORM object that happen when calling db.session.commit(), without needing an active DB connection?
    In my case, I have a class MyModel(sqlalchemy.Model) with type = db.Column(db.Text()) and I assign myModelObj.type = MyTypeEnum.MY_TYPE1. Before I do db.session.commit(), I see myModelObj.type == MyTypeEnum.MY_TYPE1 -- but after I do db.session.commit(), a transformation has happened and myModelObj.type == "MY_TYPE1"
    I believe what's happening is that SQLAlchemy expires the underlying state on my ORM object during commit, so the next attribute access triggers a reload that hits the DB
    Federico Caselli
    It seems to be working as expected. If you want the enum back on the attribute, you should use the Enum column type
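    A minimal sketch of what Federico suggests, using sqlalchemy.Enum so the attribute round-trips as the Python enum member even after the commit-time expiration and reload (the model, enum, and sqlite URL are placeholders, not the original poster's code):

```python
import enum

from sqlalchemy import Column, Enum, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class MyTypeEnum(enum.Enum):
    MY_TYPE1 = "MY_TYPE1"

class MyModel(Base):
    __tablename__ = "my_model"
    id = Column(Integer, primary_key=True)
    # Enum stores the member name in the DB and converts it back on load
    type = Column(Enum(MyTypeEnum))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    obj = MyModel(type=MyTypeEnum.MY_TYPE1)
    session.add(obj)
    session.commit()  # expires the object's attributes
    reloaded = obj.type  # this access reloads from the DB
```

    With a plain Text column the Enum member is coerced to its string form at flush time, which matches the behaviour described above.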

    Hello SQLAlchemy community, I am hoping to get some guidance on the issue below:

    the metadata catalog tool OpenMetaData https://docs.open-metadata.org/openmetadata/connectors/database/db2 is using SQLAlchemy to connect to a Db2 LUW database. The following packages are installed:
    apt-get install libxml2
    pip install "ibm-db==3.1.3"
    pip install "ibm-db-sa==0.3.8"
    But when it tries to connect, it fails with the error below:
    Failed to test connection due to {"error": "Connection error from Engine(db2+ibm_db://dt224374:@db2databaseurl:50000/TPJ) - Unknown error connecting with Engine(db2+ibm_db://dt224374:*@db2databaseurl:50000/TPJ) - (ibm_db_dbi.ProgrammingError) ibm_db_dbi::ProgrammingError: SQLNumResultCols failed: [IBM][CLI Driver][DB2/AIX64] SQL0104N An unexpected token "END-OF-STATEMENT" was found following "1.3.1"} / SELECT 42". Expected tokens may include: "<table_expr>". SQLSTATE=42601 SQLCODE=-104\n[SQL: / {"app": "OpenMetadata", "version": ""} /\nSELECT 42]\n(Background on this error at: https://sqlalche.me/e/14/f405)."}
    It is pointing to the SQLAlchemy page https://sqlalche.me/e/14/f405 but this does not help in identifying what the issue could be.

    If we use the ibm-db and ibm-db-sa packages directly in a Python script to connect to this Db2 instance, we get the same error. It seems as though the generated query 'SELECT 42' is wrong here: Db2 expects that query to be 'SELECT 42 FROM SYSIBM.SYSDUMMY1;'

    Federico Caselli
    SQLAlchemy does not support Db2 directly, so you should ask in the driver's repository
    Tea Lin

    Hello all.

    I hit a runtime error with the message "deque mutated during iteration", raised from sqlalchemy/event/attr.py line 255 (SQLAlchemy 1.4.39).

    The error started appearing after I added an event listener for the "before_commit" event, as below:
    event.listen(my_scoped_session, "before_commit", my_func, propagate=True)

    But I have no idea how to solve this.

    Does anyone know the root cause? This is my first time posting here; any pointers would be appreciated.

    Spruha Pattnaik

    Hi folks, I'm facing very high latency (200 s/call) while doing INSERTs into a specific table in my DB. I did not face this issue with the Django ORM, but when I switched to the SQLAlchemy ORM, it cropped up.

    Here's the code doing the inserts:

    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker

    class MyRepository:
        def __init__(self):
            # dburl is defined elsewhere
            _engine = create_engine(dburl, executemany_mode='batch', executemany_batch_page_size=10000, pool_size=20, max_overflow=10)
            _session_maker = sessionmaker(bind=_engine)
            self.session = _session_maker()

        def bulk_create_records(self, objects):  # max 50 objects passed
            with self.session.begin():
                self.session.bulk_save_objects(objects, return_defaults=True)
            return objects

    Invocation: records = MyRepository().bulk_create_records(record_objects)

    I'm connecting to a PostgreSQL RDS instance, and the highest wait events with these inserts ("INSERT INTO mytable (cola, colb, ...) VALUES (?, ?, ...) RETURNING mytable.id;") are SubtransControlLock and subtrans events.

    What subtransactions are getting triggered here, and how can I eliminate this issue?

    Federico Caselli
    @bestteadahan If you are adding an event listener inside another event, that's not supported; add the listener outside the callback. If that's not the case, then it may be a bug of some kind. Please post it on GitHub with a reproduction case if you have one
    Tea Lin
    @CaselIT Thanks for your advice. I switched to the @event.listens_for() decorator instead of event.listen(), and the error is resolved.
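    For reference, a minimal sketch of the decorator form that resolved this (the decorator is spelled event.listens_for; the sqlite URL and listener body are placeholders):

```python
from sqlalchemy import create_engine, event
from sqlalchemy.orm import Session

engine = create_engine("sqlite://")
session = Session(engine)
fired = []

# listens_for registers the listener once, at definition time,
# outside of any other event callback
@event.listens_for(session, "before_commit")
def my_func(session):
    fired.append("before_commit")

session.commit()
```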

    Hi, I'm new to this SQL package. I'm trying to connect to SQL Server with the following code

    connection_url = URL.create("mssql+pymssql://Username:Password@xx.103.xxx.xx/dbname")
    engine = db.create_engine(connection_url)
    connection = engine.connect()

    and get this error
    dialect, driver = name.split(".")
    ValueError: too many values to unpack (expected 2)

    Does anyone know what I did wrong?
    Alex Grönholm
    @nth2tell why are you calling URL.create()?
    Federico Caselli
    The problem is that you are using the create method incorrectly. I suggest you refer to the documentation: https://docs.sqlalchemy.org/en/14/core/engines.html?highlight=url+create#sqlalchemy.engine.URL.create
    To create a URL from a string, you should use the make_url function instead
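    A short sketch of the difference (the credentials, host, and database name are placeholders): make_url parses a full URL string, while URL.create takes the components as separate keyword arguments. Either resulting object, or the plain string itself, can then be passed to create_engine.

```python
from sqlalchemy.engine import URL, make_url

# make_url parses an existing URL string
url = make_url("mssql+pymssql://someuser:somepass@somehost/somedb")

# URL.create builds the same URL from its individual components
url2 = URL.create(
    "mssql+pymssql",
    username="someuser",
    password="somepass",
    host="somehost",
    database="somedb",
)

assert url == url2
```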
    Spruha Pattnaik
    Hi, can someone please suggest the most efficient way (in terms of DB latency) to bulk insert objects and fetch the corresponding auto-generated primary keys?
    session.bulk_save_objects(objects, return_defaults=True) causes very high DB latency for the INSERT operation ....
    I also tried session.bulk_save_objects(objects) and then selecting the objects back from the DB using an indexed hash key, but the SELECTs are also long-running ....
    Please help! It's a production issue ....
    Alex Grönholm
    @_Spruha_twitter I think your latency problems are caused by something else
    it's not sqlalchemy's fault
    Hello, I'm trying to use json_serializer and json_deserializer with the create_async_engine function, but the workflow never goes into those functions when I insert/query a JSON column of a model. Are they only for psycopg2?
    Federico Caselli
    it should work also for async engines
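    For illustration, a sketch showing the hooks firing; a synchronous sqlite engine is used here only so the example runs standalone, but create_async_engine accepts the same json_serializer/json_deserializer keyword arguments (the model and serializer are placeholders):

```python
import json

from sqlalchemy import JSON, Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

serializer_calls = []

def my_serializer(value):
    # called by the dialect when binding the JSON parameter
    serializer_calls.append(value)
    return json.dumps(value)

Base = declarative_base()

class Doc(Base):
    __tablename__ = "doc"
    id = Column(Integer, primary_key=True)
    dat = Column(JSON)

engine = create_engine(
    "sqlite://", json_serializer=my_serializer, json_deserializer=json.loads
)
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Doc(dat={"a": "b"}))
    session.commit()
```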
    Hmm.. I wanted to try that because when I try to insert a dict into a JSON Postgres column, I get the error "unhashable type: dict"; however, if I run json.dumps(my_dict) first, it works without any issue.
    If I understand the docs correctly, by default engines use json.dumps and json.loads for serialization/deserialization.
    Federico Caselli
    yes, the issue here seems different though. without a stack trace it's hard to suggest anything
      File "/Users/thibaultvallois/etl-automation/service/etl_automation/persistors/base.py", line 32, in create_all
        async with self._get_session() as session:
      File "/Users/thibaultvallois/.asdf/installs/python/3.10.4/lib/python3.10/contextlib.py", line 206, in __aexit__
        await anext(self.gen)
      File "/Users/thibaultvallois/etl-automation/service/etl_automation/persistors/base.py", line 29, in _get_session
        raise err
      File "/Users/thibaultvallois/etl-automation/service/etl_automation/persistors/base.py", line 26, in _get_session
        await session.commit()
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/ext/asyncio/session.py", line 580, in commit
        return await greenlet_spawn(self.sync_session.commit)
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 128, in greenlet_spawn
        result = context.switch(value)
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1451, in commit
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 829, in commit
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 808, in _prepare_impl
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3383, in flush
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3522, in _flush
        with util.safe_reraise():
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 208, in raise_
        raise exception
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3483, in _flush
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 630, in execute
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py", line 212, in save_obj
        for (
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py", line 398, in _organize_states_for_save
        and instance_key in uowtransaction.session.identity_map
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/identity.py", line 100, in __contains__
        if key in self._dict:
    TypeError: unhashable type: 'dict'

    from sqlalchemy import Column, Date, JSON

    class Postings(Base):
        __tablename__ = "postings"
        stamp = Column(Date, primary_key=True)
        dat = Column(JSON, primary_key=True)

    postings = Postings(stamp=my_date, dat={"a": "b"})
    async_session.add(postings)
    await async_session.commit()

    Federico Caselli
    are you trying to use a json column as primary key?
    that's not supported
    ty @CaselIT, it's obvious now
    Federico Caselli
    not supported in the ORM, that is; with Core you can do anything, of course :)
    but the ORM uses a dict to associate key -> ORM instance, so the key needs to be hashable
    Jonathan Vanasco
    I am writing missing tests for an old SQLAlchemy app and ran into this issue: the app was built against an existing database and doesn't have any of the indexes on it. The basic unique indexes are straightforward. How should I implement "CREATE UNIQUE INDEX uidx_foo ON foo(lower(name))" with __table_args__?
    Is it just DDL(raw_sql), or is there a way to leverage UniqueIndex, as I likely have some fkeys pointing to these?
    I meant UniqueConstraint
    Jonathan Vanasco
    Index("uidx", text("lower(name)"), unique=True) seems to work
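    Expanded into a full __table_args__ sketch (the table, column names, and sqlite engine are placeholders; func.lower(name) on the Column object would presumably work as well):

```python
from sqlalchemy import Column, Index, Integer, Text, create_engine, text
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Foo(Base):
    __tablename__ = "foo"
    id = Column(Integer, primary_key=True)
    name = Column(Text)
    __table_args__ = (
        # functional unique index; text() carries the raw SQL expression
        Index("uidx_foo", text("lower(name)"), unique=True),
    )

# emits: CREATE UNIQUE INDEX uidx_foo ON foo (lower(name))
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
```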
    Mohi Mirza
    I am trying to use Attribute Events to create an audit trail for association tables in many-to-many relationships. In Mapper Events such as "after_insert" we get the same Connection object, which can be used to write an audit row for the mapped object into an audit table. In Attribute Events we don't get it. Is there a way to get the Connection object from Attribute Events such as "append" and "remove"? I do not want to pass the Connection object manually, to keep my ManytoManyAuditMixin class portable.
    Alex Grönholm
    @mohimirza we use after_flush + after_commit events for that purpose
    in after_flush we use history to determine which attributes were changed
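    A sketch of the approach Alex describes, using after_flush plus attribute history (the model, the audited attribute, and the log list are placeholders; a real audit would write rows via session.connection() inside the listener):

```python
from sqlalchemy import Column, Integer, Text, create_engine, event
from sqlalchemy.orm import Session, declarative_base
from sqlalchemy.orm.attributes import get_history

Base = declarative_base()

class Item(Base):
    __tablename__ = "item"
    id = Column(Integer, primary_key=True)
    name = Column(Text)

audit_log = []

@event.listens_for(Session, "after_flush")
def audit(session, flush_context):
    # attribute history is still intact in after_flush; session.connection()
    # would supply the Connection for writing audit rows with Core
    for obj in session.dirty:
        hist = get_history(obj, "name")
        if hist.has_changes():
            audit_log.append((obj.id, hist.deleted[0], hist.added[0]))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

# expire_on_commit=False keeps the old value loaded, so it appears in history
with Session(engine, expire_on_commit=False) as session:
    item = Item(name="old")
    session.add(item)
    session.commit()
    item.name = "new"
    session.commit()
```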
    Spruha Pattnaik
    Hi folks, can I convert this raw SQL into ORM syntax?
    message_ids = conn.execute(f"SELECT nextval('messages_id_seq') as id FROM generate_series(1, {num_msgs})")
    I've observed that session.bulk_save_objects(object_list, return_defaults=True) doesn't scale up to highly concurrent inserts; it causes some latency. Therefore I'm trying to implement the above, where I generate the defaults, i.e. the id field, beforehand.
    What is your opinion on this? And what would be the best practice to scale up performance if I want to insert objects into a model and fetch the same objects back with the id field.
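    One possible Core rendering of that raw SQL (num_msgs is a hypothetical batch size; func.nextval and func.generate_series render as plain SQL function calls, so no dialect-specific constructs are needed):

```python
from sqlalchemy import func, select
from sqlalchemy.dialects import postgresql

num_msgs = 3  # hypothetical batch size

# .alias() lets the set-returning function act as the FROM clause
gs = func.generate_series(1, num_msgs).alias("gs")
stmt = select(func.nextval("messages_id_seq").label("id")).select_from(gs)

sql = str(stmt.compile(dialect=postgresql.dialect()))
print(sql)
```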
    not sure what you mean by highly concurrent inserts though
    Yang Bo
    Hi, is there a way to set a multi-valued index in __table_args__, so that Alembic can detect this index creation?
    Federico Caselli
    what do you mean by multi-valued?
    multi column?
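    Assuming "multi-valued" means a multi-column (composite) index, a minimal sketch of declaring one in __table_args__ (the model and index name are placeholders); Alembic's autogenerate compares named Index objects like this against the database:

```python
from sqlalchemy import Column, Index, Integer, Text
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class LogEntry(Base):
    __tablename__ = "log_entry"
    id = Column(Integer, primary_key=True)
    kind = Column(Text)
    ts = Column(Integer)
    __table_args__ = (
        # composite index over two columns; the explicit name lets
        # Alembic autogenerate detect and track it
        Index("ix_log_kind_ts", "kind", "ts"),
    )
```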