    Alex Grönholm
    @agronholm
    backrefs are evil
    Ryan Tin Wei Jie
    @ultratin
    It really is
    But coming from a rails activerecord background, sqlalchemy is very very different from what I expected
    dunderstr aka seimen
    @cloasdata

how do I bypass the user-defined __init__ method? The dunder __new__ trick does not work:

    from sqlalchemy import create_engine, Column, Integer, Text
    from sqlalchemy.pool import SingletonThreadPool
    from sqlalchemy.orm import declarative_base, Session, validates
    
    Base = declarative_base()
    
    
    class TableValidator(Base):
        __tablename__ = "table_validator"
        id = Column(Integer, primary_key=True)
        test_column = Column(Text)
    
        def __init__(self, test_column):
            self.test_column = test_column
    
        @validates("test_column")
        def validate(self, key, value):
            if value is None:
                raise ValueError
            else:
                return value
    
    s = Session(create_engine("sqlite://",
                              connect_args={'check_same_thread': False},
                              poolclass=SingletonThreadPool, echo=False))
    empty = TableValidator.__new__(TableValidator)
    s.add(empty)
    s.commit()

    throws:

    sqlalchemy.orm.exc.UnmappedInstanceError: Class '__main__.TableValidator' is mapped, but this instance lacks instrumentation.  This occurs when the instance is created before sqlalchemy.orm.mapper(__main__.TableValidator) was called.
    Federico Caselli
    @CaselIT
why do you need to bypass __init__?
    dunderstr aka seimen
    @cloasdata
for example, to bypass the validating action. I'd just need a "bare" object to map onto something else.
I could add a classmethod as an alternate constructor and switch the validating action off inside it. However, that needs an implementation, and for convenience I wanted to avoid it.
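One possibility, sketched below using the model from the question: ClassManager.new_instance() is the (semi-internal) hook the ORM itself uses when loading rows, so it produces a properly instrumented instance without calling __init__, and no @validates hook fires.

```python
from sqlalchemy import Column, Integer, Text, create_engine, inspect
from sqlalchemy.orm import Session, declarative_base, validates

Base = declarative_base()


class TableValidator(Base):
    __tablename__ = "table_validator"
    id = Column(Integer, primary_key=True)
    test_column = Column(Text)

    def __init__(self, test_column):
        self.test_column = test_column

    @validates("test_column")
    def validate(self, key, value):
        if value is None:
            raise ValueError("test_column must not be None")
        return value


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

# new_instance() returns an instrumented instance without invoking
# __init__, so no attribute is set and the validator never runs
empty = inspect(TableValidator).class_manager.new_instance()

with Session(engine) as session:
    session.add(empty)
    session.commit()
    inserted_id = empty.id

print(inserted_id)  # 1 -- the row was inserted with test_column NULL
```

Since this is not a documented public API, the alternate-classmethod approach is still the safer long-term choice.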
    Shefeek N
    @nshefeek

    Hi all,

    It has been only a while since I started tinkering with SQLAlchemy. I was working on a particular requirement of mine, where I had 4 types of users with login functionality, but each type has different attributes.
    The user types are: Client, Brand, Merchant, Customer. All these types are supposed to have login access to the system. I created a parent model called User, and the other models (Client, Brand, Merchant and Customer) inherit from it. I implemented it using joined table inheritance as per the documentation.

    ============================================================

    User Model

    class User(UserMixin, BaseModel):
        __tablename__ = "users"
        id = Column(Integer, primary_key=True, autoincrement=True, unique=True)
        email = Column(String, unique=True, nullable=False, index=True)
        confirmed = Column(Boolean, default=False, nullable=False)
        role = Column(Enum(Role))
        __mapper_args__ = {
            "polymorphic_on": role,
        }

    =============================================================

    Client Model

    class Client(User):
        __tablename__ = "clients"
        id = Column(Integer, ForeignKey("users.id"), primary_key=True)
        __mapper_args__ = {
            "polymorphic_identity": "client",
            'inherit_condition': (id == User.id)
        }
        client_data = (...)
        .....

    ================================================================

    Brand Model

    class Brand(User):
        __tablename__ = "brands"
        id = Column(Integer, ForeignKey("users.id"), primary_key=True)
        __mapper_args__ = {
            "polymorphic_identity": "brand",
            'inherit_condition': id == User.id
        }
        brand_data = (...)
        .....

    Then I created a User object, which is to serve as the admin user. It works fine and well.
    But the problem is when I'm creating other user types. For example, when I create a Client object, a row is created in the clients table as well as the users table. I am able to log in with the client user. But when I try to access the Client object's attributes, the object returned by querying Client is of type User.

    >>> from src.models.models import User
    >>> User.get_all()
    [<User 63: xxxxxxx>, <User 7: xxxxxxx>, <User 67: xxxxxxxx>, <User 13: xxxxxxxx>, <User 69: xxxxxxxxx>, <User 72: xxxxxxxx>]
    >>> from src.models.models import Client
    >>>
    >>> Client.get_all()
    [<User 7: xxxxxx>, <User 67: xxxxxxx>, <User 69: xxxxxx>]
    >>>

    But if I make the query on the Client object before loading User, it returns fine.

    >>> from src.models.models import Client
    >>> Client.get_all()
    [<Client xxxxx>, <Client xxxxxxx>, <Client xxxxxx>]
    >>>

    I feel like I'm missing something?
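For comparison, a minimal self-contained joined-table-inheritance setup (model names shortened, plain Session.query instead of the get_all helper, which is not shown in the question) does return subclass instances even when the base class is queried first:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True, autoincrement=True)
    email = Column(String, unique=True, nullable=False)
    role = Column(String)
    __mapper_args__ = {"polymorphic_on": role, "polymorphic_identity": "user"}


class Client(User):
    __tablename__ = "clients"
    id = Column(Integer, ForeignKey("users.id"), primary_key=True)
    client_data = Column(String)
    __mapper_args__ = {"polymorphic_identity": "client"}


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([User(email="admin@example.com"),
                     Client(email="client@example.com", client_data="x")])
    session.commit()

    # Querying the base first: the role discriminator drives the class,
    # so the client row still comes back as a Client instance
    everyone = session.query(User).all()
    print(sorted(type(u).__name__ for u in everyone))  # ['Client', 'User']

    # Querying the subclass joins users -> clients and yields Client objects
    clients = session.query(Client).all()
    print([type(c).__name__ for c in clients])  # ['Client']
```

If plain User objects come back instead, it would be worth checking that the role discriminator is actually populated on the client rows, and what the custom get_all does (caching, a hand-written query, etc.).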

    Federico Caselli
    @CaselIT
    what is get_all?
    1 reply
    Asish Binu Mathew
    @kernelguardian_twitter
is there a specific group for discussing alembic?
    mike bayer
    @zzzeek
    there's a google group but these days using github discussions is more efficient
    Asish Binu Mathew
    @kernelguardian_twitter
    github it is then
    Federico Caselli
    @CaselIT
    for quick questions you can also use this channel
    Anton Melser
    @AntonOfTheWoods

    I am struggling with some async session/connection issues with asyncpg with 1.4.26. I have the following:

    async_session: sessionmaker = sessionmaker(
        bind=engine,
        class_=AsyncSession,
        autocommit=False,
        autoflush=False,
        expire_on_commit=False,
    )
    ...
    
    async with async_session() as db:
        ... 
        await an_obj.update_list_words(db, myman)  # async method, must be awaited
        ...
    
    async def update_list_words(
        self, db: AsyncSession, manager: EnrichmentManager
    ):
        ...
        for freq, word in word_list:
            data.append(UserListDict(word=word, import_frequency=int(freq)))
    
        temp_table = f"import_{self.id}".replace("-", "_")  # self.id is a GUID
        sql = f"""CREATE TEMP TABLE {temp_table} (word text, import_frequency int, word_id int null, freq float null)"""
        await db.execute(text(sql))
        await db.execute(
            text(f"INSERT INTO {temp_table} (word, import_frequency) VALUES (:word, :import_frequency)"),
            data,
        )
        await db.execute(text(
                f"""UPDATE {temp_table}
                SET word_id = mitable.id
                FROM mitable
                WHERE word = mitable.source_text
                    and mitable.from_lang = :from_lang
                    and mitable.to_lang = :to_lang"""
            ),
            {"from_lang": manager.from_lang, "to_lang": manager.to_lang},
        )
        ...
        await db.execute(text(f"DROP TABLE {temp_table}"))
        await db.commit()

    I am getting some strange behaviour that I just can't understand. For example, I just got asyncpg.exceptions.UndefinedTableError: relation "import_19fd0dbb_0321_4124_a810_22a4efee7ab1" does not exist when it gets to the UPDATE, and I am struggling to understand how that's possible. Any pointers? It works sometimes, typically the first time it gets called; then it fails with a few different errors.
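One thing worth ruling out, sketched below with file-based SQLite standing in for Postgres: TEMP tables exist per database connection (session), so if anything between the CREATE and the UPDATE causes a statement to run on a different connection, the table genuinely does not exist there.

```python
import os
import tempfile

from sqlalchemy import create_engine, text

# A file-based SQLite database: a :memory: URL would give every
# connection its own database and prove nothing
path = os.path.join(tempfile.mkdtemp(), "demo.db")
engine = create_engine(f"sqlite:///{path}")

with engine.connect() as c1, engine.connect() as c2:
    c1.execute(text("CREATE TEMP TABLE scratch (word text)"))
    c1.execute(text("INSERT INTO scratch VALUES ('hi')"))

    # The temp table is visible on the connection that created it...
    count_on_c1 = c1.execute(text("SELECT count(*) FROM scratch")).scalar()

    # ...but a second pooled connection cannot see it at all
    try:
        c2.execute(text("SELECT count(*) FROM scratch"))
        error_name = None
    except Exception as exc:
        error_name = type(exc).__name__

print(count_on_c1, error_name)  # 1 OperationalError
```

A pgpool setup adds another layer of the same problem: if its load balancing routes the CREATE and the later statements to different backends, the same "relation does not exist" symptom reportedly appears even though each SQLAlchemy statement used the same session.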

    Federico Caselli
    @CaselIT
    that seems indeed strange. do you have the same issue by using asyncpg directly?
feel free to open a discussion or issue
    2 replies
    Anton Melser
    @AntonOfTheWoods
    There is some slight strangeness with asyncpg directly also... I am seeing this on my "production" platform, which was trying to use DB replication with pgpool. I reduced the number of replicas to 1 (like my preprod) to try and debug, but some strangeness continues. The difference from preprod is that all its services are on the same machine, whereas in prod they are on different VMs over a flaky (Contabo) network (though pgpool and the DB instance are/were on the same machine).
    Alex Grönholm
    @agronholm
    @AntonOfTheWoods does it happen even w/o pgpool?
    2 replies
    @keirono:matrix.keiron.xyz
    Hi everyone, I'm sorry about this but I badly need some assistance. I've been working on this problem all day and I'm completely lost. I led the development of a project, and the new ownership has done something that causes alembic to fail to create a database. I've used GitHub Actions to reproduce the error here: https://github.com/AberystwythSystemsBiology/limbus/runs/4156561399?check_suite_focus=true
    I am at a complete loss as to how to fix this.
    Jonathan Vanasco
    @jvanasco
    Can you point to the alembic migrations and how they're executed? Can you reference a previously working version?
    Lorenzo Rutayisire
    @loryruta
    Hi, I'm getting really confused with sessions: I have a table called "operations", and a worker that takes 100 operations, executes them, sets a processed_at field, then takes the next 100 operations. The code is something like this:
    with Session.begin() as session:
         while has_remaining_operations():
             operations = session.query(Operation) \
                            .filter(
                                Operation.id_user == user.id and
                                Operation.processed_at is None
                            ) \
                            .order_by(Operation.created_at.asc()) \
                            .limit(100) \
                            .all()
    
             for op in operations:
                  # code to execute
                  op.processed_at = datetime.now().isoformat()
    Here's the pseudo-code of what I'm doing.
    The issue is that the same operations are executed more than once, as if the processed_at modification doesn't get applied when session.query is executed
    Lorenzo Rutayisire
    @loryruta
    maybe I should commit before re-doing the query? but if an error occurs in a single operation I'd like to reverse the whole process...
    mikenerone
    @mikenerone:matrix.org
    [m]
    @loryruta: I'm no expert, and I'm half shooting in the dark, but two approaches come to mind:
    1. Don't repeat the query - have just one query and fetch the results in chunked fashion (https://docs.sqlalchemy.org/en/14/orm/query.html#sqlalchemy.orm.Query.yield_per).
    2. Since you're ordered by Operation.created_at, grab the last one from each chunk and add a WHERE Operation.created_at > last_created_at_from_previous_chunk.
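A sketch of option 1, assuming an Operation model with the columns used above. Note the filter has to be written with and_() and .is_(None), since Python's `and` and `is` don't translate to SQL:

```python
from datetime import datetime

from sqlalchemy import Column, DateTime, Integer, and_, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class Operation(Base):
    __tablename__ = "operations"
    id = Column(Integer, primary_key=True)
    id_user = Column(Integer)
    processed_at = Column(DateTime)
    created_at = Column(DateTime)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all(
        Operation(id_user=1, created_at=datetime(2021, 1, 1, 0, i))
        for i in range(5)
    )
    session.commit()

    # One query, streamed in chunks of 100 rows: no re-querying, so
    # there is no window for already-processed rows to be re-fetched
    query = (
        session.query(Operation)
        .filter(and_(Operation.id_user == 1, Operation.processed_at.is_(None)))
        .order_by(Operation.created_at.asc())
        .yield_per(100)
    )
    for op in query:
        op.processed_at = datetime.now()
    session.commit()

    remaining = (
        session.query(Operation)
        .filter(Operation.processed_at.is_(None))
        .count()
    )
    print(remaining)  # 0 -- everything was processed in one pass
```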
    Lorenzo Rutayisire
    @loryruta
    I would firstly like to understand why my solution isn't working before trying workarounds...
    Jonathan Vanasco
    @jvanasco
    call session.flush() at the end of the loop
    Lorenzo Rutayisire
    @loryruta
    session.flush(objects=[op]) soon after op.processed_at = ... right?
    Jonathan Vanasco
    @jvanasco
    In your example, you're modifying the objects in Python but not updating the database
    Lorenzo Rutayisire
    @loryruta
    I've already tried adding flush after setting processed_at and I was still getting the same result...
    Jonathan Vanasco
    @jvanasco
session.flush() should work
at the same indent level as the for loop, right after it
    Lorenzo Rutayisire
    @loryruta
    I found the actual issue...
yeah, flush is required, however it continued giving me wrong results
the issue was the filter clause. It should have looked like this:
                            .filter(and_(
                                SavedOp.id_user == user.id,
                                SavedOp.processed_at.is_(None)
                            )) \
    Alex Grönholm
    @agronholm
    that would not be required if you used a single chunking query methinks
    Jonathan Vanasco
    @jvanasco
    I didn't catch that. You can also do SavedOp.processed_at.op("is")(None)
    but is_(None) is the correct form
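The underlying gotcha, illustrated with a stand-in model: Python's `is` and `and` cannot be overloaded, so they evaluate immediately instead of building SQL, and part of the filter is silently dropped.

```python
from sqlalchemy import Column, DateTime, Integer
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class SavedOp(Base):
    __tablename__ = "saved_ops"
    id = Column(Integer, primary_key=True)
    processed_at = Column(DateTime)


# `is` performs a Python identity check against the column attribute
# and yields a plain bool, never a SQL expression
py_result = SavedOp.processed_at is None
print(py_result)  # False

# The SQL-expression spellings render an IS NULL comparison instead
print(SavedOp.processed_at.is_(None))  # saved_ops.processed_at IS NULL
print(SavedOp.processed_at == None)    # saved_ops.processed_at IS NULL

# With `and`, Python asks for the truth of the first comparison (False,
# since the operands are distinct objects), so the whole expression
# collapses to the first comparison and the NULL test disappears
combined = (SavedOp.id == 1) and (SavedOp.processed_at is None)
print(combined)  # saved_ops.id = :id_1
```

That collapse is exactly why the original query kept returning rows that already had processed_at set.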
    Lorenzo Rutayisire
    @loryruta
    nope, I can't chunk
    Saltymakov Timofey
    @Actticus
    Hi. How can I do an async bulk update with one statement? I can't find any information about it in the docs:
    await session.execute(
        sqlalchemy.update(some_table)
        .where(some_table.id.in_(ids))
        .values([{'a': .., ...}, {'a': .., ...}, ...])
    )
    2 replies
    I tried something like that but caught "UPDATE construct does not support multiple parameter sets".
    Alex Grönholm
    @agronholm
    is that from sqlalchemy itself or the rdbms?
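That error comes from SQLAlchemy itself: a list passed to .values() means a multi-VALUES INSERT, which UPDATE does not have. One pattern that does support multiple parameter sets, sketched below with a stand-in Core table: put bindparam() in the WHERE/VALUES clauses and pass a list of dicts to execute(), which runs the single UPDATE in executemany style, one parameter set per row.

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table, bindparam,
                        create_engine, select, update)

metadata = MetaData()
some_table = Table(
    "some_table", metadata,
    Column("id", Integer, primary_key=True),
    Column("a", String),
)

engine = create_engine("sqlite://")
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(some_table.insert(),
                 [{"id": 1, "a": "old"}, {"id": 2, "a": "old"}])

    # One UPDATE construct, many parameter sets; the bindparam() names
    # must differ from the column names used in .values()
    stmt = (
        update(some_table)
        .where(some_table.c.id == bindparam("b_id"))
        .values(a=bindparam("b_a"))
    )
    conn.execute(stmt, [{"b_id": 1, "b_a": "x"}, {"b_id": 2, "b_a": "y"}])

    rows = conn.execute(select(some_table.c.a).order_by(some_table.c.id)).all()
    print(rows)  # [('x',), ('y',)]
```

With the async engine the same statement should go through `await conn.execute(stmt, params)` on an AsyncConnection in the same way.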
    Alex Grönholm
    @agronholm
    why am I getting TypeError: dict is not a sequence when running engine.execute(...)? I'm not even passing it any parameters, just a literal SQL UPDATE statement
    it's crashing at cursor.execute(statement, parameters) where cursor is a psycopg2 cursor
    and parameters is an empty dict
    Federico Caselli
    @CaselIT
    strange, if cursor is a psycopg2 object sqlalchemy has no say in it
    btw I've just tried the same and I get no error: cur.execute('select 1', {}) where type(cur) is psycopg2.extensions.cursor