    Federico Caselli
    @CaselIT
    that indeed seems strange. do you have the same issue when using asyncpg directly?
    feel free to open a discussion or an issue
    Anton Melser
    @AntonOfTheWoods
    There is some slight strangeness with asyncpg directly also... I am seeing this on my "production" platform, which was trying to use db replication with pgpool. I reduced the number of replicas to 1 (like my preprod) to try and debug but some strangeness continues. The issue with preprod is that all services are on the same machine, whereas the prod are on different VMs over a flakey (Contabo) network (though pgpool and the db instance are/were on the same machine).
    Alex Grönholm
    @agronholm
    @AntonOfTheWoods does it happen even w/o pgpool?
    @keirono:matrix.keiron.xyz
    Hi everyone, I'm sorry about this but I badly need some assistance. I've been working on this problem all day and I'm completely lost. I led the development of a project, and the new ownership has done something that causes alembic to fail to create a database. I've used GitHub Actions to reproduce the error here: https://github.com/AberystwythSystemsBiology/limbus/runs/4156561399?check_suite_focus=true
    I am at a complete loss as to how to fix this.
    Jonathan Vanasco
    @jvanasco
    Can you point to the alembic migrations and how they're executed? Can you reference a previously working version?
    Lorenzo Rutayisire
    @loryruta
    Hi, I'm getting really confused with sessions: I have a table called "operations", then I have a worker that takes 100 operations, executes them and sets a field processed_at, then takes the next 100 operations. The code is something like this:
    with Session.begin() as session:
         while has_remaining_operations():
             operations = session.query(Operation) \
                            .filter(
                                Operation.id_user == user.id and
                                Operation.processed_at is None
                            ) \
                            .order_by(Operation.created_at.asc()) \
                            .limit(100) \
                            .all()
    
             for op in operations:
                  # code to execute
                  op.processed_at = datetime.now().isoformat()
    Here's the pseudo-code of what I'm doing
    The issue is that the same operations are executed more than once, as if the processed_at modification weren't applied when session.query is executed
    Lorenzo Rutayisire
    @loryruta
    maybe I should commit before re-doing the query? but if an error occurs in a single operation I'd like to roll back the whole process...
    mikenerone
    @mikenerone:matrix.org
    [m]
    @loryruta: I'm no expert, and I'm half shooting in the dark, but two approaches come to mind:
    1. Don't repeat the query - have just one query and fetch the results in chunked fashion (https://docs.sqlalchemy.org/en/14/orm/query.html#sqlalchemy.orm.Query.yield_per).
    2. Since you're ordered by Operation.created_at, grab the last one from each chunk and add a WHERE Operation.created_at > last_created_at_from_previous_chunk.
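For reference, the first approach could be sketched like this: one query streamed in chunks with yield_per() instead of re-querying. This is a minimal runnable sketch against an in-memory SQLite database; the Operation model and its columns are reconstructed from the discussion, so the exact names and types are assumptions.

```python
# Sketch of approach 1: a single query, fetched in chunks of 100 via
# yield_per(), marking rows as processed along the way. Model names and
# column types are illustrative, reconstructed from the chat.
from datetime import datetime, timedelta, timezone

from sqlalchemy import Column, DateTime, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Operation(Base):
    __tablename__ = "operations"
    id = Column(Integer, primary_key=True)
    id_user = Column(Integer, nullable=False)
    created_at = Column(DateTime, nullable=False)
    processed_at = Column(String, nullable=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

# Seed some unprocessed operations for a single user.
with Session(engine) as session:
    start = datetime(2021, 1, 1)
    session.add_all(
        Operation(id_user=1, created_at=start + timedelta(seconds=i))
        for i in range(250)
    )
    session.commit()

with Session(engine) as session:
    query = (
        session.query(Operation)
        .filter(Operation.id_user == 1, Operation.processed_at.is_(None))
        .order_by(Operation.created_at.asc())
        .yield_per(100)  # stream rows in chunks of 100 instead of re-querying
    )
    for op in query:
        op.processed_at = datetime.now(timezone.utc).isoformat()
    # One commit at the end keeps the whole run in a single transaction,
    # so an error anywhere rolls back everything.
    session.commit()
```

Because everything happens inside one transaction and one result set, there is no need to re-run the query and no risk of seeing the same row twice.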
    Lorenzo Rutayisire
    @loryruta
    I would firstly like to understand why my solution isn't working before trying workarounds...
    Jonathan Vanasco
    @jvanasco
    call session.flush() at the end of the loop
    Lorenzo Rutayisire
    @loryruta
    session.flush(objects=[op]) soon after op.processed_at = ... right?
    Jonathan Vanasco
    @jvanasco
    In your example, you're modifying the objects in Python but not updating the database
    Lorenzo Rutayisire
    @loryruta
    I've already tried adding flush after setting processed_at and I was still getting the same result...
    Jonathan Vanasco
    @jvanasco
    session.flush() should work
    at the same indent level as the for loop, right after it
    Lorenzo Rutayisire
    @loryruta
    I found the actual issue...
    yeah, flush is required, but it continued giving me wrong results
    the issue was the filter clause. It should have looked like this:
                            .filter(and_(
                                SavedOp.id_user == user.id,
                                SavedOp.processed_at.is_(None)
                            )) \
    Alex Grönholm
    @agronholm
    that would not be required if you used a single chunking query methinks
    Jonathan Vanasco
    @jvanasco
    I didn't catch that. You can also do SavedOp.processed_at.op("is")(None)
    but is_(None) is the correct form
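For context, here is a small sketch of why the original filter silently misbehaved. Python's `and` and `is` cannot be overloaded by SQLAlchemy, so they operate on the expression objects themselves rather than building SQL. The SavedOp model below is illustrative.

```python
# Why "A and B" / "is None" break inside filter(): Python evaluates them
# itself, so the processed_at condition never reaches the SQL.
from sqlalchemy import Column, Integer, String, and_
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class SavedOp(Base):
    __tablename__ = "saved_operations"
    id = Column(Integer, primary_key=True)
    id_user = Column(Integer)
    processed_at = Column(String)

# Broken: bool() of the first comparison evaluates to False here, so "and"
# returns that first comparison and the processed_at condition is silently
# dropped ("... is None" is just a plain Python bool anyway).
broken = SavedOp.id_user == 1 and SavedOp.processed_at is None
print(broken)  # renders only the id_user comparison

# Correct: and_() plus is_(None) build the intended SQL expression.
correct = and_(SavedOp.id_user == 1, SavedOp.processed_at.is_(None))
print(correct)  # renders both conditions, with IS NULL
```

That matches the symptom in the chat: the query kept returning already-processed rows because the `processed_at` condition was never part of the emitted SQL.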
    Lorenzo Rutayisire
    @loryruta
    nope, I can't chunk
    Saltymakov Timofey
    @Actticus
    Hi. How can I do an async bulk update in one statement? I can't find any information about it in the docs
    await session.execute(
        sqlalchemy.update(some_table)
        .where(some_table.id.in_(ids))
        .values([{'a': .., ...}, {'a': .., ...}, ...])
    )
    I tried something like that but got "UPDATE construct does not support multiple parameter sets".
    Alex Grönholm
    @agronholm
    is that from sqlalchemy itself or the rdbms?
    Alex Grönholm
    @agronholm
    why am I getting TypeError: dict is not a sequence when running engine.execute(...)? I'm not even passing it any parameters, just a literal SQL UPDATE statement
    it's crashing at cursor.execute(statement, parameters) where cursor is a psycopg2 cursor
    and parameters is an empty dict
    Federico Caselli
    @CaselIT
    strange, if cursor is a psycopg2 object sqlalchemy has no say in it
    btw I've just tried the same and I get no error: cur.execute('select 1', {}) where type(cur) is psycopg2.extensions.cursor
    Alex Grönholm
    @agronholm
    maybe it has to do with the contents of the query text
    GauravPandey-NECI
    @Gauravp-NEC
    Hi, how can I list the processes of SQLAlchemy?
    Federico Caselli
    @CaselIT
    what do you mean? sqlalchemy on its own does not have any processes
    Alex Grönholm
    @agronholm
    perhaps that is a reference to PostgreSQL where client connections are shown as "processes" connecting to the server
    GauravPandey-NECI
    @Gauravp-NEC
    Hi, how can we check whether "os.fork" is implemented in the SQLAlchemy source code?
    How can we check whether the "os.fork" functionality is used in SQLAlchemy??
    Alex Grönholm
    @agronholm
    @Gauravp-NEC are you trying to ask if os.fork() is used in sqlalchemy, or what
    Varsha Ramesh
    @varsharmsh

    Hi,
    We are trying to use SQLAlchemy compile to generate queries in different dialects. It is somehow not working for complex queries.
    QUERY
    SELECT partsupp.ps_supplycost, region.r_regionkey FROM region JOIN partsupp ON
    (partsupp.ps_partkey + 698) < region.r_regionkey

    SELECT OBJECT

    <class 'sqlalchemy.sql.selectable.Select'>
    SELECT partsupp.ps_supplycost, region.r_regionkey
    FROM region JOIN partsupp ON
    (partsupp.ps_partkey + %(ps_partkey_1)s) < region.r_regionkey

    USAGE

    Federico Caselli
    @CaselIT
    I'm not sure what the issue is @varsharmsh
    Cyril Margorin
    @comargo
    Hello there.
    I've got a noob question.
    https://gist.github.com/comargo/0ca997e45ce2d59ca456f556c6d5a07d
    How can I make relationship(..., backref="...") populate parent class before using it?
    but you should probably use back_populates
    Cyril Margorin
    @comargo
    @CaselIT Thanks!!! the configure_mappers() was exactly what I was looking for, but why is back_populates better than backref? It requires me to write the same thing twice....
    Federico Caselli
    @CaselIT
    you write the other side of the relationship
    in 99% of the cases it's better since it's explicit, and it's easier to reason about once you have to re-read old code and don't remember where that backref was defined
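The explicit two-sided form could be sketched like this; the Parent/Child model names are illustrative, not from the original gist.

```python
# Sketch of back_populates: each side of the relationship is declared
# explicitly and names the attribute on the other side.
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    # Side 1: Parent.children points back at Child.parent.
    children = relationship("Child", back_populates="parent")

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("parent.id"))
    # Side 2: Child.parent points back at Parent.children.
    parent = relationship("Parent", back_populates="children")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    p = Parent(children=[Child()])
    # Both directions are synchronized in memory before any flush.
    print(p.children[0].parent is p)
```

Compared with a backref, it is more typing, but anyone reading either class sees the relationship immediately instead of having to find the one class that declared the backref.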