    Alex Grönholm
    @agronholm
    yup
    Alex Bender
    @alex-bender
    nice
    Safa Alfulaij
    @SafaAlfulaij

    Hi all.

    Using ORM.
    I have a O2M relationship that I set to lazy="raise" (addresses in User table).
    Now I want to use the relationship User.addresses in a query:
    result = await async_session.execute(User.addresses.statement)
    But it complains saying AttributeError: 'InstrumentedList' object has no attribute 'statement'

    I don't want to change the lazy so that I don't accidentally call it.

    Federico Caselli
    @CaselIT
    I don't know what that API is. Please see this page for details on relationship loading: https://docs.sqlalchemy.org/en/14/orm/loading_relationships.html?highlight=relationship+loading
    Safa Alfulaij
    @SafaAlfulaij

    Another question: How to joinedload and selectinload on the same table?
    Example:
    User - O2M - Addresses
    User - M2O (FK) - Group

    This doesn't work:
    select(User).options(joinedload(User.group).selectinload(User.addresses))

    Alex Grönholm
    @agronholm
    @SafaAlfulaij have you tried giving them as separate arguments to options()?
    chaining them indicates that you somehow want to load the "addresses" relationship in the related Group
    Federico Caselli
    @CaselIT

    select(User).options(joinedload(User.group).selectinload(User.addresses))

    you should not chain them, since addresses is not part of Group. They should be separated by a comma, since both are "top level" (on User): select(User).options(joinedload(User.group), selectinload(User.addresses))

    Safa Alfulaij
    @SafaAlfulaij
    Thanks both, figured that after a while.
    I'm coming from Django ORM background, so please be patient with me :)
    Federico Caselli
    @CaselIT
    No problem!
    Will Shamblin
    @wcshamblin
    Hi guys, I have a problem that I'm not sure how to approach.
    I have 3 classes that all share some attributes - (id, entry time, owner, category) and I would like to be able to query them all at once.
    So say I need to search for objects where entry time < some datetime: I want to retrieve all instances, across the three classes, where that is the case. Not knowing much about SQLAlchemy, I figured the easiest solution would be an abstract class that could then be used for querying, but it seems that is not supported?
    Right now I'm stuck, because the only querying that seems to be supported is through one class at a time - session.query(cls).filter() - but I need to query across multiple classes by their shared attributes.
    Federico Caselli
    @CaselIT
    Usually such classes are defined in a hierarchy of some kind; otherwise the ORM would not know what class to return for each row.
    Pascal Brückner
    @sylencecc
    Hello! Using async SQLAlchemy, let's assume I have all my relationships defined with lazy="raise", which raises an exception each time I try to access the respective attribute. Is there a way to load such an "unloaded" relationship after I have already obtained the object from the ORM? According to the documentation, Session.refresh() doesn't automatically refresh unloaded relationships. How do I load the relationship, then? I'm aware of joinedload() & friends, but in the case of a large object graph I would like to avoid eagerly loading the whole database with every single query.
    Federico Caselli
    @CaselIT
    there may be an API missing from refresh. In the meantime I believe you can use session.get with populate_existing=True, since that takes a list of options where you can specify a loader
    it's also in the "tip" box in the refresh documentation
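A minimal sketch of that approach, shown with a sync Session for brevity (with AsyncSession it would be await session.get(...)); the model names are hypothetical:

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship, selectinload

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    # lazy="raise" forbids implicit lazy loading of this collection
    addresses = relationship("Address", lazy="raise")

class Address(Base):
    __tablename__ = "addresses"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey("users.id"))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([User(id=1), Address(id=1, user_id=1)])
    session.commit()

    # Re-fetch by identity: populate_existing refreshes the object already
    # in the session, and the loader option eagerly loads the relationship,
    # so accessing it afterwards does not trigger the lazy="raise" error.
    user = session.get(
        User, 1, populate_existing=True, options=[selectinload(User.addresses)]
    )
    n_addresses = len(user.addresses)
```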
    Pascal Brückner
    @sylencecc
    Thanks, I'll give that a try!
    wnesbv
    @wnesbv
    Good afternoon. Could those familiar with this situation share their experience: only the first line from the CSV file is added..
            async with aiofiles.open(
                directory, mode="r", encoding="utf-8", newline=""
            ) as afp:
                async for new in AsyncDictReader(afp):
                    query = ScheduleService(
                        **{
                            "id": new["id"],
                            "name": new["name"],
                            "type": new["type"],
                            "title": new["title"],
                            "description": new["description"],
                            "date": datetime.strptime(
                                new["date"], settings.DATE
                            ).date(),
                            "there_is": datetime.strptime(
                                new["there_is"], settings.DATETIME_FORMAT
                            ),
                            "created_at": datetime.now(),
                            "sch_s_owner": new["sch_s_owner"],
                            "sch_s_service_id": new["sch_s_service_id"],
                        }
                    )
                    # ..
                    session.add_all(query)
                    session.refresh([query])
                    await session.commit()
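This one went unanswered in the channel, but a few things stand out in the snippet: session.add_all() expects a list of objects (here it receives a single instance), session.refresh([query]) passes a list where a single instance is expected and is missing its await, and committing on every row is unnecessary. A sketch of the corrected loop shape, shown synchronously with csv.DictReader and a hypothetical Item model standing in for ScheduleService:

```python
import csv
import io

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Item(Base):  # stand-in for ScheduleService
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

csv_file = io.StringIO("id,name\n1,alpha\n2,beta\n")

with Session(engine) as session:
    for row in csv.DictReader(csv_file):
        # add() takes one object; add_all() would need a list
        session.add(Item(id=int(row["id"]), name=row["name"]))
    # commit once after the loop instead of per row
    session.commit()
    count = session.query(Item).count()
```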
    Anler
    @anler
    Hi all, I'm trying to follow the tutorial on caching (SQLAlchemy version 1.3): https://docs.sqlalchemy.org/en/13/_modules/examples/dogpile_caching/caching_query.html but whenever I execute the line self.merge_result(..., load=False) I get AttributeError: '<MyModel>' object has no attribute '_sa_instance_state'. Does anyone know if this is me making a mistake/wrong assumption, or is the tutorial missing something?
    Jonathan Vanasco
    @jvanasco
    The example code changed heavily with the 1.4 release. I'm not sure if the 1.3 docs worked on the 1.3 version, or if they were from an earlier version. In any case, you should be using 1.4 as 1.3 is approaching End of Life
    jstevans
    @jstevans:matrix.org
    [m]
    Is there a way to perform the value transformations on my ORM object that happen when calling db.session.commit(), without needing an active DB connection?
    In my case, I have a class MyModel(sqlalchemy.Model) with type = db.Column(db.Text()) and I assign myModelObj.type = MyTypeEnum.MY_TYPE1. Before I do db.session.commit(), I see myModelObj.type == MyTypeEnum.MY_TYPE1 -- but after I do db.session.commit(), a transformation has happened and myModelObj.type == "MY_TYPE1"
    jstevans
    @jstevans:matrix.org
    [m]
    I believe what's happening is that SQLAlchemy triggers a read from the DB, by deleting the underlying state from my ORM object, which triggers a reload on the next state access, which seems to hit the DB
    Federico Caselli
    @CaselIT
    It seems to work as expected. If you want the enum kept on the attribute, you should use the Enum column type
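A sketch of what that looks like, with a hypothetical MyTypeEnum/MyModel matching the question; sqlalchemy.Enum stores the member name in the database and hands back the enum member on load:

```python
import enum

from sqlalchemy import Column, Enum, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

class MyTypeEnum(enum.Enum):
    MY_TYPE1 = "MY_TYPE1"

Base = declarative_base()

class MyModel(Base):
    __tablename__ = "my_model"
    id = Column(Integer, primary_key=True)
    # Enum(...) round-trips the enum member; a plain Text column
    # would coerce it to the string "MY_TYPE1" on flush.
    type = Column(Enum(MyTypeEnum))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(MyModel(id=1, type=MyTypeEnum.MY_TYPE1))
    session.commit()  # expires the object; the next access reloads from the DB
    loaded_type = session.get(MyModel, 1).type
```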
    upenbendre
    @upenbendre

    Hello SQLAlchemy community, I am hoping to get some guidance on the issue below:

    the metadata catalog tool Open MetaData https://docs.open-metadata.org/openmetadata/connectors/database/db2 is using SQLAlchemy to connect to a Db2 LUW database. The below packages are installed :
    apt-get install libxml2
    pip install "ibm-db==3.1.3"
    pip install "ibm-db-sa==0.3.8"
    But when it tries to connect, it leads to error below:
    Failed to test connection due to {"error": "Connection error from Engine(db2+ibm_db://dt224374:@db2databaseurl:50000/TPJ) - Unknown error connecting with Engine(db2+ibm_db://dt224374:*@db2databaseurl:50000/TPJ) - (ibm_db_dbi.ProgrammingError) ibm_db_dbi::ProgrammingError: SQLNumResultCols failed: [IBM][CLI Driver][DB2/AIX64] SQL0104N An unexpected token "END-OF-STATEMENT" was found following "1.3.1"} */ SELECT 42". Expected tokens may include: "<table_expr>". SQLSTATE=42601 SQLCODE=-104\n[SQL: /* {"app": "OpenMetadata", "version": "0.11.3.1"} */\nSELECT 42]\n(Background on this error at: https://sqlalche.me/e/14/f405)."}
    It is pointing to the SQLAlchemy page https://sqlalche.me/e/14/f405 but this does not help in identifying what the issue could be.

    If we use these packages ibm-db, ibm-db-sa directly in a Python script to connect to this DB2, we get the same error. It seems as though the generated query 'SELECT 42' here is wrong. DB2 expects that query to be 'SELECT 42 FROM SYSIBM.SYSDUMMY1;'

    Federico Caselli
    @CaselIT
    SQLAlchemy does not support DB2 directly, so you should ask in the driver's repository
    Tea Lin
    @bestteadahan

    Hello all.

    I'm hitting a runtime error with the message "deque mutated during iteration", raised from sqlalchemy/event/attr.py line 255 (SQLAlchemy 1.4.39).

    The error started appearing after I added an event listener on the "before_commit" event, as below:
    event.listen(my_scoped_session, "before_commit", my_func, propagate=True)

    But I have no idea how to solve this.

    Does anyone know the root cause? This is my first time leaving a message here; any pointers would be appreciated.

    Spruha Pattnaik
    @_Spruha_twitter

    Hi folks, I'm facing a very high latency (200s/call) while doing INSERTs into a specific table in my DB. I did not face this issue while using Django ORM, but when I switched to SQLAlchemy ORM, it cropped up.

    Here's the code doing the inserts:

    class MyRepository():
        def __init__(self):
            _engine = create_engine(dburl, executemany_mode='batch', executemany_batch_page_size=10000, pool_size=20, max_overflow=10)
            _session_maker = sessionmaker(bind=_engine)
            self.session = _session_maker()
    
        def bulk_create_records(self, objects): #max 50 objects passed
            with self.session.begin():
                self.session.bulk_save_objects(objects, return_defaults=True)
                self.session.expunge_all()
            return objects

    Invocation: records = MyRepository().bulk_create_records(record_objects)

    I'm connecting to a PostgreSQL RDS instance & the highest wait events with these inserts ("INSERT into mytable (cola, colb, ...) VALUES (?, ?, ...) RETURNING mytable.id;") are caused by SubtransControlLock and subtrans events.

    What subtransactions are getting triggered here & how can I eliminate this issue?

    Federico Caselli
    @CaselIT
    @bestteadahan If you are adding an event inside another event, that's not supported, add the listener outside the callback. If that's not the case then it may be a bug of some kind. Please post it on GitHub with a reproduction case if you have one
    Tea Lin
    @bestteadahan
    @CaselIT Thanks for your advice. I ended up using the decorator @event.listens_for() instead of event.listen(), and the error is resolved.
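For reference, the decorator form is registered once at module level (not from inside another event callback). A sketch with a hypothetical before-commit hook attached to a sessionmaker:

```python
from sqlalchemy import Column, Integer, create_engine, event
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Thing(Base):
    __tablename__ = "things"
    id = Column(Integer, primary_key=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

factory = sessionmaker(bind=engine)
calls = []

# Decorator form of event registration; listeners attached to the
# sessionmaker apply to every Session it creates.
@event.listens_for(factory, "before_commit")
def my_func(session):
    calls.append("before_commit")

session = factory()
session.add(Thing(id=1))
session.commit()  # fires my_func before the COMMIT is emitted
session.close()
```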
    nth2tell
    @nth2tell

    Hi, I'm new to this sql package. I'm trying to connect to the sql server with the following link

    connection_url = URL.create("mssql+pymssql://Username:Password@xx.103.xxx.xx/dbname")
    engine = db.create_engine(connection_url)
    connection = engine.connect()

    and get this error
    dialect, driver = name.split(".")
    ValueError: too many values to unpack (expected 2)

    Does anyone know what I did wrong?
    Alex Grönholm
    @agronholm
    @nth2tell why are you calling URL.create()?
    Federico Caselli
    @CaselIT
    The problem is that you are using the create method incorrectly. I suggest referring to the documentation: https://docs.sqlalchemy.org/en/14/core/engines.html?highlight=url+create#sqlalchemy.engine.URL.create
    To create a URL from a string you should use the make_url function instead
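A sketch of both constructors side by side (host and credentials are placeholders): make_url parses a complete URL string, while URL.create builds one from individual components.

```python
from sqlalchemy.engine import URL, make_url

# make_url: parse a full URL string (placeholder credentials/host)
url = make_url("mssql+pymssql://Username:Password@example.com/dbname")

# URL.create: build the equivalent URL from components
url2 = URL.create(
    "mssql+pymssql",
    username="Username",
    password="Password",
    host="example.com",
    database="dbname",
)

drivername = url.drivername
same_host = url.host == url2.host
```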
    Spruha Pattnaik
    @_Spruha_twitter
    Hi, Can someone please suggest the most efficient (in terms of DB latency) way to do bulk insert of objects & fetch the corresponding auto generated primary key?
    session.bulk_save_objects(objects, return_defaults=True) causes very high DB latency for the INSERT operation ....
    I also tried session.bulk_save_objects(objects) and then select the objects from DB using a hash key (which is indexed) but the select operations are also long running....
    Please help! It's a production issue ....
    Alex Grönholm
    @agronholm
    @_Spruha_twitter I think your latency problems are caused by something else
    it's not sqlalchemy's fault
    tvallois
    @tvallois:matrix.org
    [m]
    Hello, I'm trying to use json_serializer and json_deserializer with the create_async_engine function, but the workflow never goes into those functions when I insert/query a JSON column of a model. Are they only for psycopg2?
    Federico Caselli
    @CaselIT
    it should work also for async engines
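A quick sync sketch showing the hook being invoked; create_async_engine accepts the same json_serializer/json_deserializer keywords, and SQLite is used here only to keep the example self-contained:

```python
import json

from sqlalchemy import JSON, Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

calls = []

def my_serializer(obj):
    # instrumented serializer so we can observe the engine calling it
    calls.append("serialize")
    return json.dumps(obj)

Base = declarative_base()

class Doc(Base):
    __tablename__ = "docs"
    id = Column(Integer, primary_key=True)
    payload = Column(JSON)

engine = create_engine("sqlite://", json_serializer=my_serializer)
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Doc(id=1, payload={"a": "b"}))
    session.commit()  # flushing the JSON column goes through my_serializer
```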
    tvallois
    @tvallois:matrix.org
    [m]
    Hmm.. I wanted to try that because when I insert a dict into a JSON Postgres column, I get an error "unhashable type: 'dict'"; however, if I run json.dumps(my_dict) first, it works without any issue.
    If I understand the docs correctly, by default engines use json.dumps and json.loads for serialization/deserialization.
    Federico Caselli
    @CaselIT
    yes, the issue here seems different though. without a stack trace it's hard to suggest anything
    tvallois
    @tvallois:matrix.org
    [m]
      File "/Users/thibaultvallois/etl-automation/service/etl_automation/persistors/base.py", line 32, in create_all
        async with self._get_session() as session:
      File "/Users/thibaultvallois/.asdf/installs/python/3.10.4/lib/python3.10/contextlib.py", line 206, in __aexit__
        await anext(self.gen)
      File "/Users/thibaultvallois/etl-automation/service/etl_automation/persistors/base.py", line 29, in _get_session
        raise err
      File "/Users/thibaultvallois/etl-automation/service/etl_automation/persistors/base.py", line 26, in _get_session
        await session.commit()
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/ext/asyncio/session.py", line 580, in commit
        return await greenlet_spawn(self.sync_session.commit)
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 128, in greenlet_spawn
        result = context.switch(value)
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1451, in commit
        self._transaction.commit(_to_root=self.future)
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 829, in commit
        self._prepare_impl()
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 808, in _prepare_impl
        self.session.flush()
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3383, in flush
        self._flush(objects)
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3522, in _flush
        with util.safe_reraise():
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
        compat.raise_(
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 208, in raise_
        raise exception
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3483, in _flush
        flush_context.execute()
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute
        rec.execute(self)
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 630, in execute
        util.preloaded.orm_persistence.save_obj(
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py", line 212, in save_obj
        for (
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py", line 398, in _organize_states_for_save
        and instance_key in uowtransaction.session.identity_map
      File "/Users/thibaultvallois/Library/Caches/pypoetry/virtualenvs/etl-automation-GVE63ZMH-py3.10/lib/python3.10/site-packages/sqlalchemy/orm/identity.py", line 100, in __contains__
        if key in self._dict:
    TypeError: unhashable type: 'dict'

    ```
    class Postings(Base):
        stamp = Column(Date, primary_key=True)
        dat = Column(JSON, primary_key=True)

    postings = Postings(stamp=my_date, dat={"a": "b"})
    await async_session.create(postings)
    await async_session.commit()
    ```
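No answer followed in the channel, but the last traceback frame (identity.py, `if key in self._dict`) hints at the cause: the identity-map key is built from the primary key values, and making the JSON column part of the primary key puts an unhashable dict into that key. One possible fix, sketched with a surrogate integer key and the JSON data as a plain column (model details are assumptions):

```python
import datetime

from sqlalchemy import JSON, Column, Date, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Postings(Base):
    __tablename__ = "postings"
    # Surrogate key keeps the identity-map key hashable; a JSON
    # primary-key column would put a dict into it and fail on flush.
    id = Column(Integer, primary_key=True)
    stamp = Column(Date)
    dat = Column(JSON)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Postings(stamp=datetime.date(2022, 1, 1), dat={"a": "b"}))
    session.commit()  # no "unhashable type: 'dict'"
    loaded = session.query(Postings).one().dat
```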