<class 'app.models.Cluster'> <class 'app.models.Cluster'> <class 'sqlalchemy.ext.declarative.api.DeclarativeMeta'> <class 'sqlalchemy.ext.declarative.api.DeclarativeMeta'> False <class 'app.models.Cluster'> - not sure why isinstance evaluates to False when the type and its string representation are the same
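This usually means the class object itself exists twice: the module was imported under two different names (e.g. `app.models` and a second path that re-executes the same file), so isinstance compares two distinct classes that merely print alike. A minimal stdlib sketch of the effect, with a diagnostic based on comparing `id()`:

```python
# Sketch: if app.models is imported twice under different module names,
# Python executes the class statement twice and creates two distinct
# class objects with identical names and reprs.
class Cluster:          # stands in for the first import of app.models.Cluster
    pass

Original = Cluster

class Cluster:          # stands in for the second, duplicate import
    pass

obj = Cluster()
print(isinstance(obj, Original))                        # False: distinct class objects
print(Original.__qualname__ == type(obj).__qualname__)  # True: identical names
print(id(Original), id(type(obj)))                      # different ids confirm the duplication
```

Comparing `type(obj).__module__` (or `id()`) for the two classes in the real application should show where the duplicate import comes from.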
sqlacodegen --generator dataclasses
Hi, I am trying to run a raw SQL query through the ORM, but SQLAlchemy returns an error and the generated query isn't what I expect. The only issue is with the FROM part of the query: my raw query has a single table in the FROM clause and the other two tables in JOINs, whereas in the SQLAlchemy version the JOINs are okay but two tables end up in the FROM clause.
SQLAlchemy query:
result = (
    session.query(ProductionMaster.article, Barcode.size, sa.func.count(Barcode.size))
    .join(Barcode, Barcode.serial_no == ProductionScan.serial_no)
    .join(ProductionMaster, ProductionMaster.prod_code == Barcode.prod_code)
    .filter(
        sa.and_(
            ProductionScan.date >= "2021-08-01 08:00:00",
            ProductionScan.date <= "2021-08-25 08:00:00",
            ProductionMaster.article.like("%3780%black%"),
        )
    )
    .group_by(ProductionMaster.article, Barcode.size)
    .all()
)
Error message I am getting:
The multi-part identifier "tbl_ProductionScan.serial_no" could not be bound. (4104) (SQLExecDirectW)')
The query returned with the error message:
SELECT [Tbl_ProductionMaster].[ProductionName] AS [Tbl_ProductionMaster_ProductionName],
       [Tbl_barcode].[Size] AS [Tbl_barcode_Size],
       count([Tbl_barcode].[Size]) AS count_1
FROM [tbl_ProductionScan], [Tbl_ProductionMaster]
JOIN [Tbl_barcode] ON [Tbl_barcode].serial_no = [tbl_ProductionScan].serial_no
JOIN [Tbl_ProductionMaster] ON [Tbl_ProductionMaster].[ProductionCode] = [Tbl_barcode].[Product_Code]
WHERE [tbl_ProductionScan].prod_date >= ? AND [tbl_ProductionScan].prod_date <= ?
  AND [Tbl_ProductionMaster].[ProductionName] LIKE ?
GROUP BY [Tbl_ProductionMaster].[ProductionName], [Tbl_barcode].[Size]
The actual query I am expecting: (This works fine)
SELECT [barcode].[dbo].[Tbl_ProductionMaster].[ProductionName],
       [barcode].[dbo].[Tbl_barcode].[Size],
       COUNT([barcode].[dbo].[Tbl_barcode].[Size])
FROM [barcode].[dbo].[tbl_ProductionScan]
INNER JOIN [barcode].[dbo].[Tbl_barcode]
    ON [barcode].[dbo].[Tbl_barcode].[Serial_no] = [barcode].[dbo].[tbl_ProductionScan].[serial_no]
INNER JOIN [barcode].[dbo].[Tbl_ProductionMaster]
    ON [barcode].[dbo].[Tbl_ProductionMaster].[ProductionCode] = [barcode].[dbo].[Tbl_barcode].[Product_code]
WHERE [barcode].[dbo].[tbl_ProductionScan].[prod_date] BETWEEN '2021-08-01 08:00:00' AND '2021-08-25 08:00:00'
  AND [barcode].[dbo].[Tbl_ProductionMaster].[ProductionName] LIKE '%3780%black%'
GROUP BY [barcode].[dbo].[Tbl_ProductionMaster].[ProductionName], [barcode].[dbo].[Tbl_barcode].[Size]
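A likely cause (an assumption from the generated SQL, since the model definitions aren't shown): ProductionScan appears only in the join conditions, never in query() or select_from(), so SQLAlchemy adds it to the FROM clause implicitly alongside the joined tables, producing the cartesian FROM. Anchoring the FROM with select_from() should give the single-FROM shape of the raw query. A minimal runnable sketch with simplified stand-in models and column types:

```python
import sqlalchemy as sa
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

# Simplified stand-ins for the real mapped classes
class ProductionScan(Base):
    __tablename__ = "tbl_ProductionScan"
    serial_no = sa.Column(sa.String, primary_key=True)
    date = sa.Column(sa.String)

class Barcode(Base):
    __tablename__ = "Tbl_barcode"
    serial_no = sa.Column(sa.String, primary_key=True)
    prod_code = sa.Column(sa.String)
    size = sa.Column(sa.String)

class ProductionMaster(Base):
    __tablename__ = "Tbl_ProductionMaster"
    prod_code = sa.Column(sa.String, primary_key=True)
    article = sa.Column(sa.String)

session = Session(sa.create_engine("sqlite://"))
query = (
    session.query(ProductionMaster.article, Barcode.size, sa.func.count(Barcode.size))
    .select_from(ProductionScan)   # anchor the FROM clause on the scan table
    .join(Barcode, Barcode.serial_no == ProductionScan.serial_no)
    .join(ProductionMaster, ProductionMaster.prod_code == Barcode.prod_code)
    .filter(ProductionScan.date.between("2021-08-01 08:00:00", "2021-08-25 08:00:00"))
    .filter(ProductionMaster.article.like("%3780%black%"))
    .group_by(ProductionMaster.article, Barcode.size)
)
print(str(query))  # single FROM: tbl_ProductionScan JOIN Tbl_barcode JOIN Tbl_ProductionMaster
```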
Hi everyone, I have a question about selecting only a few columns from the Parent relationship.
I have these two tables:
class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    label = Column(String)
    children = relationship("Child", back_populates="parent")
class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey('public.parent.id'))
    parent = relationship("Parent", back_populates="children")
When I query it, I get the parent object with the list of children as I was expecting, but what I would like to do is select only a few columns, like this:
db.query( models.Parent.id, models.Parent.children)
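Passing the relationship to query() won't work, but loader options can restrict the loaded columns on both sides. A runnable sketch of the same Parent/Child models (without the public schema, so it runs on SQLite) using load_only() chained onto the relationship loader:

```python
import sqlalchemy as sa
from sqlalchemy.orm import (Session, declarative_base, joinedload,
                            load_only, relationship)

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = sa.Column(sa.Integer, primary_key=True)
    label = sa.Column(sa.String)
    children = relationship("Child", back_populates="parent")

class Child(Base):
    __tablename__ = "child"
    id = sa.Column(sa.Integer, primary_key=True)
    parent_id = sa.Column(sa.Integer, sa.ForeignKey("parent.id"))
    parent = relationship("Parent", back_populates="children")

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
db = Session(engine)
db.add(Parent(label="p1", children=[Child(), Child()]))
db.commit()

parents = (
    db.query(Parent)
    .options(
        load_only(Parent.id),                             # load only Parent.id
        joinedload(Parent.children).load_only(Child.id),  # and only Child.id
    )
    .all()
)
print(parents[0].id, [c.id for c in parents[0].children])
```

The other columns stay deferred and are only fetched on attribute access.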
Hi all. I have a column (PostgreSQL 14, SQLAlchemy 1.4.27) that I thought would update on every row update, declared as follows:
class utcnow(expression.FunctionElement):  # pylint: disable=W0223
    type = DateTime()

...

updated_at = Column(
    DateTime(True),
    nullable=False,
    onupdate=utcnow(),
    server_default=utcnow(),
    index=True,
)
This was what I came up with. I had the impression it works... sometimes. What am I missing?
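Two things to check. First, a custom FunctionElement needs a @compiles hook so it renders as actual SQL; the sketch below follows the recipe in the SQLAlchemy compiler-extension docs. Second, onupdate= only fires when the ORM itself flushes an UPDATE for that row; Query.update() and Core-level UPDATE statements bypass column-level onupdate, which would explain it working only "sometimes".

```python
from sqlalchemy import DateTime
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql import expression

class utcnow(expression.FunctionElement):
    type = DateTime()
    inherit_cache = True  # avoids 1.4 cache warnings for custom elements

@compiles(utcnow, "postgresql")
def pg_utcnow(element, compiler, **kw):
    # PostgreSQL-specific rendering of "now in UTC"
    return "TIMEZONE('utc', CURRENT_TIMESTAMP)"

@compiles(utcnow)
def default_utcnow(element, compiler, **kw):
    # generic fallback so the element compiles on other backends too
    return "CURRENT_TIMESTAMP"

from sqlalchemy.dialects import postgresql
print(utcnow().compile(dialect=postgresql.dialect()))
```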
I have a problem but I'm not sure whether it belongs here or with sqlalchemy. Once that's determined I'll provide the required details.
Using SQLAlchemy's PickleType I'm loading large file BLOBs into Oracle. Unfortunately they are too large and I get
DPI-1057: buffer size too large.
From oracle/python-cx_Oracle#480 I learned that streaming writes must be used in order to write LOBs larger than 1073741822 bytes.
As far as I know cx_Oracle provides the DB-API implementation that SQLAlchemy uses, but I'm uncertain exactly where the boundary lies.
This brings me to my question:
Is support for streaming writes (to support large LOBs) something to be implemented in python-cx_Oracle or in SQLAlchemy?
I think I'm stuck because I don't understand the differences between the abstractions:
My current understanding is that cx_Oracle is a Python wrapper around the Oracle Client libraries. It exposes a DB-API (PEP 249) compatible interface.
I'm unclear about the "Oracle dialect" - https://docs.sqlalchemy.org/en/14/dialects/.
Is that a separate thing that sits between Core and the DBAPI driver? Or is it part of either cx_Oracle or SQLAlchemy Core?
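The dialect is part of SQLAlchemy itself: it translates Core/ORM constructs into a backend's SQL and bind conventions, and holds a reference to the DBAPI driver module it hands statements to (cx_Oracle for oracle+cx_oracle). A quick way to see that layering, sketched with SQLite so it runs anywhere:

```python
from sqlalchemy import create_engine

engine = create_engine("sqlite://")
# The dialect class lives inside the sqlalchemy package...
print(type(engine.dialect).__module__)   # sqlalchemy.dialects.sqlite.…
# ...and it wraps the separate DBAPI driver module (sqlite3 here,
# cx_Oracle for the oracle+cx_oracle dialect).
print(engine.dialect.dbapi.__name__)
```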
That sounds like a reasonable approach. We're using SQLAlchemy across the whole application, so it'd be nice to be able to interface with it somehow.
Would it go something like this?
connection = engine.raw_connection()
cursor = connection.cursor()
And then that's the same cursor & connection they are using here:
id_val = 9
lob_var = cursor.var(cx_Oracle.DB_TYPE_BLOB)
cursor.execute("""
    insert into lob_tbl (id, b)
    values (:1, empty_blob())
    returning b into :2""", [id_val, lob_var])
blob, = lob_var.getvalue()
offset = 1
num_bytes_in_chunk = 65536
with open("image.png", "rb") as f:
    while True:
        data = f.read(num_bytes_in_chunk)
        if data:
            blob.write(data, offset)
        if len(data) < num_bytes_in_chunk:
            break
        offset += len(data)
connection.commit()
class Box(BaseModel):
    __tablename__ = "box"
    id = Column(Integer, primary_key=True)
    category = Column(String)

class Folder(BaseModel):
    __tablename__ = "folder"
    id = Column(Integer, primary_key=True)
    box_id = Column(Integer, ForeignKey('box.id'))
    box = relationship(Box, backref=backref("folders"))
    category = ???  # In particular, I don't want this column to ACTUALLY exist in this table

# I'd like:
session.query(Folder).filter(Folder.category == "Mid-17th-Century Pocket Lint Collectors")
# to implicitly result in SQL with the effect of:
# SELECT * FROM folder, box WHERE folder.box_id = box.id AND box.category = 'Mid-17th-Century Pocket Lint Collectors'
category = column_property(box.category) should work, but apparently the box field isn't populated yet at that point, because I get an AttributeError on
@CaselIT: So using association proxy with my earlier example, like
class Folder(BaseModel):
    ...
    box_id = Column(Integer, ForeignKey('box.id'))
    box = relationship(Box, backref=backref("folders"))
    category = association_proxy("box", "category")
Though referencing it works in some contexts (like .filter(Folder.category == "x")), it doesn't transparently work in other column contexts. Specifically:
session.query(Folder.category).all() -> InvalidRequestError: SQL expression, column, or mapped entity expected - got 'ColumnAssociationProxyInstance(AssociationProxy('box', 'category'))'
Any way to make that work? What I want is for callers not to have to know that this isn't a real column.
SELECT associated_table.associated_col_name AS col_name ...?
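One approach that behaves like a real column in both filter() and query() is a correlated scalar-subquery column_property, following the pattern in the SQLAlchemy column_property docs. It renders as a subquery rather than the implicit-join form above, but callers never have to know. A runnable sketch of the earlier Box/Folder example:

```python
import sqlalchemy as sa
from sqlalchemy.orm import (Session, backref, column_property,
                            declarative_base, relationship)

Base = declarative_base()

class Box(Base):
    __tablename__ = "box"
    id = sa.Column(sa.Integer, primary_key=True)
    category = sa.Column(sa.String)

class Folder(Base):
    __tablename__ = "folder"
    id = sa.Column(sa.Integer, primary_key=True)
    box_id = sa.Column(sa.Integer, sa.ForeignKey("box.id"))
    box = relationship(Box, backref=backref("folders"))
    # No real column on folder: a correlated subquery against box
    category = column_property(
        sa.select(Box.category)
        .where(Box.id == box_id)
        .correlate_except(Box)
        .scalar_subquery()
    )

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add(Folder(box=Box(category="Mid-17th-Century Pocket Lint Collectors")))
session.commit()

# Works both as a selected column and in a filter
print(session.query(Folder.category).all())
print(session.query(Folder)
      .filter(Folder.category == "Mid-17th-Century Pocket Lint Collectors")
      .count())
```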