Feel free to hang out here; note that some of the "chat" happens on libera.chat #sqlalchemy
SELECT associated_table.associated_col_name AS col_name ...
?
select specifically - I just meant the SELECT expressions in the compiled SQL resulting from session.query(Model.an_association_proxy_attr).
The downside of column_property() is that in my actual usage, I have about a dozen such mapped fields, and the result is that the generated query has a dozen subselects to the same related table. I would kind of like the eager loading, but not at that expense. :P
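For context, a minimal sketch (hypothetical Parent/Child models, SQLAlchemy 1.4 style) of the column_property() approach under discussion; each such property compiles into its own correlated scalar subselect:
from sqlalchemy import Column, ForeignKey, Integer, String, select
from sqlalchemy.orm import column_property, declarative_base

Base = declarative_base()

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    child_id = Column(ForeignKey("child.id"))
    # one correlated subselect per property: a dozen of these means a
    # dozen subselects against the same related table in the compiled SQL
    child_name = column_property(
        select(Child.name).where(Child.id == child_id).scalar_subquery()
    )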
association_proxy doesn't work with order_by(), either. While using it in select expressions was a nice-to-have, order_by actually is a show-stopper for me. sigh
@CaselIT: So I came up with this:
from sqlalchemy.ext.hybrid import hybrid_property

def hybrid_association_proxy(target: str, attr: str):
    prop = hybrid_property(lambda self: getattr(getattr(self, target), attr))
    return prop.expression(lambda cls: getattr(getattr(cls, target).mapper.class_, attr))
You use it just like association_proxy, but it makes a hybrid_property under the covers that works in both filter() and order_by() (the latter requires that you ensure the join in some other way, so it's not 100% transparent, but at least that's easy and immediately discovered if you don't).
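Here's a hedged usage sketch of the helper above (hypothetical Parent/Child models, given an existing Session named session):
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    child_id = Column(ForeignKey("child.id"))
    child = relationship("Child")
    child_name = hybrid_association_proxy("child", "name")

# works directly in filter(); for order_by() you have to ensure the join yourself
query = (
    session.query(Parent)
    .join(Parent.child)
    .filter(Parent.child_name == "spam")
    .order_by(Parent.child_name)
)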
I haven't checked whether it works with declared_attr yet. I'll get around to it in a bit and know for sure.
import logging
import pandas as pd
import snowflake.connector as snow
from snowflake.connector.pandas_tools import write_pandas

SNOWFLAKE_ACCOUNT = 'jwa13025'  # better: os.getenv('SF_ACCOUNT')
SNOWFLAKE_USER = 'nkumar'       # better: os.getenv('SF_USR')
SNOWFLAKE_PWD = 'Kuldeep@12'    # better: os.getenv('SF_PWD')

conn = snow.connect(user=SNOWFLAKE_USER,
                    password=SNOWFLAKE_PWD,
                    account=SNOWFLAKE_ACCOUNT,
                    warehouse="WORLDPANTRYDW",
                    database="SDS",
                    schema="PUBLIC")

logging.basicConfig(level=logging.INFO, filename='Load_SDS_Header.log', filemode='w',
                    format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p')

try:
    result = conn.cursor().execute("use schema PUBLIC")
    print(result)
    # raw string so the backslashes in the Windows path are not treated as escapes
    SDS_HEADERS = pd.read_csv(r"C:\BalaReports\SDS\Oracle_SDS_Datafiles_Nov21\Headers.txt",
                              sep="\t", header=None,
                              names=["Column_Id", "Header_ColId", "Header_ColDescription"])
    # write_pandas takes the bare table name; database and schema are separate arguments
    success, nchunks, nrows, _ = write_pandas(conn, SDS_HEADERS, "SDS_HEADERS",
                                              database="SDS", schema="PUBLIC")
    print(success)
except snow.DatabaseError as e:
    logging.error("Exception occurred", exc_info=True)
    print('SQL Execution Error: {0}'.format(e.msg))
    print('Snowflake Query Id: {0}'.format(e.sfqid))
    print('Error Number: {0}'.format(e.errno))
    print('SQL State: {0}'.format(e.sqlstate))
finally:
    conn.close()
    logging.info('Trace Other Issue', exc_info=True)
"'''"import pandas as pd"'''"
"'''"import snowflake.connector as snow "'''"
"'''"from snowflake.connector.pandas_tools import write_pandas"'''"
"'''"import logging"'''"
"'''"conn = snow.connect(user=SNOWFLAKE_USER,"'''"
"'''" password=SNOWFLAKE_PWD ,"'''"
"'''" account=SNOWFLAKE_ACCOUNT,"'''"
"'''" warehouse="WORLDPANTRYDW", "'''"
"'''" database="SDS", "'''"
"'''" schema="PUBLIC"); "'''"
"'''"try: "'''"
"'''" logging.basicConfig(level=logging.INFO, filename='Load_SDS_Header.log', filemode='w', format='%(asctime)s %(message)s',datefmt='%m/%d/%Y %I:%M:%S %p') "'''"
"'''" Result=conn.cursor().execute("use schema PUBLIC") "'''"
"'''" print(Result) "'''"
"'''" SDS_HEADERS = pd.read_csv("C:\BalaReports\SDS\Oracle_SDS_Datafiles_Nov21\Headers.txt",sep="\t",header=None,names=["Column_Id","Header_ColId","Header_ColDescription"]) "'''"
"'''" #print(SDSHEADERS) "'''"
"'''" success, nchunks, nrows, = write_pandas(conn, SDS_HEADERS, "SDS.PUBLIC.SDS_HEADERS",schema="PUBLIC") "'''"
"'''" print(success) "'''"
"'''"#except conn.error.DatabaseError as e: "'''"
"'''" #raise e.errno "'''"
"'''" logging.error("Exception occurred", exc_info=True) "'''"
"'''"#except exception as e: "'''"
"'''" #raise e.errno "'''"
"'''" #print('SQL Execution Error: {0}'.format(e.msg)) "'''"
"'''" #print('Snowflake Query Id: {0}'.format(e.sfqid)) "'''"
"'''" #print('Error Number: {0}'.format(e.errno)) "'''"
"'''" #print('SQL State: {0}'.format(e.sqlstate)) "'''"
"'''"finally: "'''"
"'''" conn.close() "'''"
"'''" logging.info('Trace Other Issue', exc_info=True) "'''"
Hi all :wave:
How do you manage merging relationships properly? By default, SQLAlchemy tries to add related objects to the session instead of merging them.
Here is a short example:
for track_name in network.get_tracks():
    track = Track(name=track_name)
    track.load_tags()  # Here relationships are filled with Tag objects
    session.merge(track)
session.commit()
This will fail if a track previously loaded its tags: SQLAlchemy will try to insert the already-existing tags again.
Here is what load_tags looks like:
def load_tags(self):
    for tag_name in self.network.get_tags(track=self.name):
        tag = Tag(name=tag_name)
        self.tags.append(tag)
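One way around this (a sketch, assuming Tag.name is unique and that the method has access to a Session, here a hypothetical self.session) is to look up the existing tag before appending, so merge() never sees a duplicate pending Tag:
def load_tags(self):
    for tag_name in self.network.get_tags(track=self.name):
        # reuse the persistent Tag if one with this name already exists;
        # only create a new Tag for names never seen before
        tag = self.session.query(Tag).filter_by(name=tag_name).one_or_none()
        if tag is None:
            tag = Tag(name=tag_name)
        self.tags.append(tag)
This is essentially the "unique object" recipe from the SQLAlchemy wiki.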
Hello, I'm trying to create an Engine using FILEDSN, which is a file path to a DSN file.
I use pyodbc to connect to a SQL Server. I successfully connected by using
pyodbc.connect("FILEDSN=/path/to/dsn/file")
but it doesn't seem to work with create_engine. I tried:
test_url = URL.create("mssql+pyodbc", query={"odbc_connect": "FILEDSN=/path/to/dsn/file"})
engine = create_engine(test_url)
You can use the creator function for that: https://docs.sqlalchemy.org/en/14/core/engines.html?highlight=create%20engine#sqlalchemy.create_engine.params.creator
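A minimal sketch of the creator approach (keeping the asker's placeholder DSN path); create_engine() calls the function instead of building the DBAPI connection itself, so the FILEDSN string is passed through to pyodbc unchanged:
import pyodbc
from sqlalchemy import create_engine

def connect():
    return pyodbc.connect("FILEDSN=/path/to/dsn/file")

# the URL only selects the dialect/driver; the actual connection comes from creator
engine = create_engine("mssql+pyodbc://", creator=connect)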