feel free to hang out here; note that some of the "chat" happens on libera.chat #sqlalchemy
```python
import logging

import pandas as pd
import snowflake.connector as snow
from snowflake.connector.errors import DatabaseError
from snowflake.connector.pandas_tools import write_pandas

conn = snow.connect(user=SNOWFLAKE_USER,
                    password=SNOWFLAKE_PWD,
                    account=SNOWFLAKE_ACCOUNT,
                    warehouse="WORLDPANTRYDW",
                    database="SDS",
                    schema="PUBLIC")
try:
    logging.basicConfig(level=logging.INFO, filename='Load_SDS_Header.log',
                        filemode='w', format='%(asctime)s %(message)s',
                        datefmt='%m/%d/%Y %I:%M:%S %p')
    result = conn.cursor().execute("use schema PUBLIC")
    print(result)
    # Raw string so the backslashes in the Windows path are not treated as escapes
    SDS_HEADERS = pd.read_csv(r"C:\BalaReports\SDS\Oracle_SDS_Datafiles_Nov21\Headers.txt",
                              sep="\t", header=None,
                              names=["Column_Id", "Header_ColId", "Header_ColDescription"])
    # print(SDS_HEADERS)
    # write_pandas takes the bare table name; database/schema go in their own parameters
    success, nchunks, nrows = write_pandas(conn, SDS_HEADERS, "SDS_HEADERS", schema="PUBLIC")
    print(success)
except DatabaseError:
    logging.error("Exception occurred", exc_info=True)
except Exception:
    logging.info('Trace Other Issue', exc_info=True)
finally:
    conn.close()
```
Hi all :wave:
How do you manage merging relationships properly? By default SQLAlchemy is trying to add related objects to the session instead of merging them.
Here is a short example:
```python
for track_name in network.get_tracks():
    track = Track(name=track_name)
    track.load_tags()  # Here relationships are filled with Tag objects
    session.merge(track)
session.commit()
```
This fails if a track previously loaded its tags: SQLAlchemy tries to insert the existing tag again.
`load_tags` looks like:
```python
def load_tags(self):
    for tag_name in self.network.get_tags(track=self.name):
        tag = Tag(name=tag_name)
        self.tags.append(tag)
```
Hello, I'm trying to create an Engine using FILEDSN, which is a file path to a DSN file.
I use pyodbc to connect to a SQL Server. I successfully connected using
```python
pyodbc.connect("FILEDSN=/path/to/dsn/file")
```
but it doesn't seem to work with create_engine. I tried
```python
test_url = URL.create("mssql+pyodbc", query={"odbc_connect": "FILEDSN=/path/to/dsn/file"})
engine = create_engine(test_url)
```
there is a `creator` parameter for that: https://docs.sqlalchemy.org/en/14/core/engines.html?highlight=create%20engine#sqlalchemy.create_engine.params.creator
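roughly like this — a sketch of the `creator` hook, using `sqlite3` as a stand-in since I can't test against SQL Server here (with pyodbc you'd return `pyodbc.connect("FILEDSN=/path/to/dsn/file")` from the same function and use the `"mssql+pyodbc://"` URL):

```python
import sqlite3

from sqlalchemy import create_engine, text

def get_conn():
    # Stand-in for pyodbc.connect("FILEDSN=/path/to/dsn/file")
    return sqlite3.connect(":memory:")

# The URL only selects the dialect; creator() supplies the DBAPI connection,
# so the FILEDSN never has to round-trip through the SQLAlchemy URL at all.
engine = create_engine("sqlite://", creator=get_conn)

with engine.connect() as conn:
    print(conn.execute(text("select 1")).scalar())  # prints 1
```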
Looks like I don't use the session properly. Not sure what I'm missing 🤔
When fetching existing data from the database and then updating it, SQLAlchemy doesn't seem to recognize already-stored objects that are part of a relationship and tries to insert them again.
""" Here I generate data and merge it into the database """
for track_name in ['track-one','track-two','track-three']:
track = Track(name=track_name)
track.load_top_tags()
session.merge(track)
session.commit()
""" Clean session """
session.expunge_all()
""" Here I fetch stored tracks and update their related tag objects (doesnt work) """
for track in session.query(Track).all():
for top_tag in track.top_tags: # Load tags from database
print(f"{track.name} has top tag {top_tag.tag.name}")
track.load_top_tags()
session.merge(track)
session.commit()
This results in an `(sqlite3.IntegrityError) UNIQUE constraint failed: tags.id` error, because SQLAlchemy is trying to insert data that already exists.
Here is the full example with resulting output: https://colab.research.google.com/drive/18OMhYNxZGnKU4gaQ8l-ObKmVQEf50wtp?usp=sharing
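For what it's worth, `merge()` matches rows by primary key, so one way this pattern can work is to map the natural keys as primary keys — then a freshly constructed object with the same key merges into the existing row instead of inserting a duplicate. A minimal self-contained sketch (the `Track`/`Tag`/`track_tags` mapping here is illustrative, not your actual model):

```python
from sqlalchemy import Column, ForeignKey, String, Table, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

# Plain many-to-many association table between tracks and tags
track_tags = Table(
    "track_tags", Base.metadata,
    Column("track_name", ForeignKey("tracks.name"), primary_key=True),
    Column("tag_name", ForeignKey("tags.name"), primary_key=True),
)

class Tag(Base):
    __tablename__ = "tags"
    name = Column(String, primary_key=True)  # natural key as primary key

class Track(Base):
    __tablename__ = "tracks"
    name = Column(String, primary_key=True)
    tags = relationship(Tag, secondary=track_tags)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    # Run the load twice; because merge() resolves objects by primary key
    # (and cascades to the tags collection), the second pass updates the
    # existing rows instead of re-inserting them.
    for _ in range(2):
        track = Track(name="track-one")
        track.tags = [Tag(name="rock"), Tag(name="indie")]
        session.merge(track)
        session.commit()
    print(session.query(Track).count(), session.query(Tag).count())  # prints 1 2
```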
I thought I could check membership against the already-fetched rows (something like `my_instance in set(fetched_objs_from_table)`), but that doesn't seem to work. Or I could try to insert each object and catch the `IntegrityError` when the unique-key constraint fails, but that is probably bad for performance.
Thanks Federico, I will check that out.
What I am using now as a workaround is a class decorator which adds an equality check (i.e., overriding `__eq__` and `__ne__` of the objects), similar to the one described here: https://stackoverflow.com/questions/39043003/comparing-sqlalchemy-object-instances-for-equality-of-attributes. The difference is that my decorator takes the names of the attributes to compare as arguments, instead of checking all of them. (Unfortunately, at decoration time the `__table_args__` are not populated yet, otherwise I could use the attributes from the `UniqueConstraint` of the table for the comparison.)
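The decorator is roughly like this (a simplified sketch; `eq_by` and the plain `Foo` class are illustrative names, not my actual code):

```python
def eq_by(*attrs):
    """Class decorator: compare instances on the given attribute names."""
    def wrap(cls):
        def __eq__(self, other):
            return isinstance(other, cls) and all(
                getattr(self, a) == getattr(other, a) for a in attrs
            )
        cls.__eq__ = __eq__
        cls.__ne__ = lambda self, other: not self.__eq__(other)
        # __hash__ is left untouched, so hashes stay identity-based and
        # inconsistent with __eq__ -- which is why the objects can't
        # usefully go in a set().
        return cls
    return wrap

@eq_by("name")
class Foo:
    def __init__(self, name):
        self.name = name

print(Foo("a") == Foo("a"))  # prints True
print(Foo("a") == Foo("b"))  # prints False
```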
With this I can do:
```python
# Too bad those are in a list and can't be put in a set
foos_existing = session.execute(sqlalchemy.select(Foo)).scalars().all()
for foo in foos_new:
    # Fancy equality check happening here
    if foo not in foos_existing:
        session.add(foo)
session.commit()
```
Issues:
- The existing rows come back as a list, not a `set()`, so every membership check is a linear scan.
- If new `Foo` records get inserted into the table while this code runs, the insert might add a record which already exists (i.e., has just been added), so I might have to wrap this in a transaction.

Any ideas to improve this overall?