    Nirmal Kumar Kosuru
    @nikkijyos:matrix.org
    [m]
    I have all the objects required for loading
    Can anyone let me know what's the issue?
    Alex Grönholm
    @agronholm
    @nikkijyos:matrix.org would you mind properly formatting that code above?
    10 replies
    Nirmal Kumar Kosuru
    @nikkijyos:matrix.org
    [m]
    I am trying to load this file
    Nirmal Kumar Kosuru
    @nikkijyos:matrix.org
    [m]

    import logging

    import pandas as pd
    import snowflake.connector as snow
    from snowflake.connector.pandas_tools import write_pandas

    conn = snow.connect(user=SNOWFLAKE_USER,
                        password=SNOWFLAKE_PWD,
                        account=SNOWFLAKE_ACCOUNT,
                        warehouse="WORLDPANTRYDW",
                        database="SDS",
                        schema="PUBLIC")

    try:
        logging.basicConfig(level=logging.INFO, filename='Load_SDS_Header.log',
                            filemode='w', format='%(asctime)s %(message)s',
                            datefmt='%m/%d/%Y %I:%M:%S %p')
        result = conn.cursor().execute("use schema PUBLIC")
        print(result)
        # Raw string so the backslashes in the Windows path are not treated as escapes.
        SDS_HEADERS = pd.read_csv(r"C:\BalaReports\SDS\Oracle_SDS_Datafiles_Nov21\Headers.txt",
                                  sep="\t", header=None,
                                  names=["Column_Id", "Header_ColId", "Header_ColDescription"])
        success, nchunks, nrows, _ = write_pandas(conn, SDS_HEADERS,
                                                  "SDS.PUBLIC.SDS_HEADERS",
                                                  schema="PUBLIC")
        print(success)
    except Exception:
        # snowflake.connector.errors.DatabaseError would be the more specific
        # exception to catch here.
        logging.error("Exception occurred", exc_info=True)
    finally:
        conn.close()

    Nirmal Kumar Kosuru
    @nikkijyos:matrix.org
    [m]
    I have attached the file I am trying to load
    Any help is much appreciated.
    Federico Caselli
    @CaselIT

    snowflake.connector.errors.ProgrammingError: 002043 (02000): 01a0b3c8-0500-e563-0029-ec830066c0ba: SQL compilation error:
    Object does not exist, or operation cannot be performed.

    It seems the table is missing. Just guessing, I've never used Snowflake.

    4 replies
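That guess fits the connector's behavior at the time: write_pandas loads into an existing table and does not create it. A minimal sketch of creating the table first; the column names and VARCHAR types are assumptions based on the read_csv call above, not the actual file's schema:

```python
# Assumed DDL: write_pandas expects the target table to already exist,
# so create it before the load. Types here are guesses (VARCHAR for all
# three columns named in the read_csv call).
create_headers_ddl = """
CREATE TABLE IF NOT EXISTS SDS.PUBLIC.SDS_HEADERS (
    "Column_Id" VARCHAR,
    "Header_ColId" VARCHAR,
    "Header_ColDescription" VARCHAR
)
"""

def ensure_headers_table(conn):
    """Run the DDL on an open Snowflake connection before calling write_pandas."""
    conn.cursor().execute(create_headers_ddl)
```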
    Alex Grönholm
    @agronholm
    @nikkijyos:matrix.org when I told you how to fix your formatting, this was not it. I meant to add ``` (three backticks) before the snippet and after it. You used single quotes instead of backticks, wrapped in double quotes (which I never meant to be included), and you put them on every line.
    4 replies
    Nirmal Kumar Kosuru
    @nikkijyos:matrix.org
    [m]
    Tables inside the database
    Nirmal Kumar Kosuru
    @nikkijyos:matrix.org
    [m]
    I have one question here: write_pandas issues CREATE stage, PUT, and COPY commands during this process and stores the CSV file in a stage. Do I need to grant any permission for creating the stage, like GRANT USAGE, READ, WRITE ON FUTURE STAGES IN SCHEMA "PUBLIC" TO ROLE "SYSADMIN"?
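Whether extra grants are needed depends on the role running the load; write_pandas stages the data via a temporary stage, so the executing role needs stage privileges on the schema. A hedged sketch, with statements that are assumptions about this particular role setup:

```python
# Hypothetical grants: the exact privileges required depend on your role
# hierarchy. These cover the common case where the loading role needs to
# use the schema and create (temporary) stages in it.
stage_grants = [
    'GRANT USAGE ON SCHEMA SDS.PUBLIC TO ROLE SYSADMIN',
    'GRANT CREATE STAGE ON SCHEMA SDS.PUBLIC TO ROLE SYSADMIN',
]

def apply_stage_grants(conn):
    """Run the grants on a connection opened as a sufficiently privileged role."""
    cur = conn.cursor()
    for stmt in stage_grants:
        cur.execute(stmt)
```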
    Leonid Komarovsky
    @shpoont

    Hi all :wave:
    How do you manage merging relationships properly? By default, SQLAlchemy tries to add related objects to the session instead of merging them.
    Here is a short example:

    for track_name in network.get_tracks():
        track = Track(name=track_name)
        track.load_tags() # Here relationships are filled with Tag objects
        session.merge(track)
        session.commit()

    This will fail if a track previously loaded its tags: SQLAlchemy will try to insert the existing tag again.

    This is what load_tags looks like:
    def load_tags(self):
        for tag_name in self.network.get_tags(track=self.name):
            tag = Tag(name=tag_name)
            self.tags.append(tag)
    Leonid Komarovsky
    @shpoont
    I assume that with the default cascade including merge, it should propagate the merge to tags as well, but it doesn't work in my case.
    Federico Caselli
    @CaselIT
    is name the pk? IIRC merge works on pk only
    4 replies
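A minimal in-memory SQLite sketch of that point: when the natural key is the primary key, session.merge() matches the existing row instead of inserting a duplicate. The model below is illustrative, not the asker's actual schema:

```python
from sqlalchemy import Column, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Track(Base):
    __tablename__ = "tracks"
    # Making the natural key the primary key is what lets merge() match
    # an existing row rather than issuing a second INSERT.
    name = Column(String, primary_key=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.merge(Track(name="track-one"))
    session.commit()
    # Merging the same name again updates (or no-ops), it does not insert.
    session.merge(Track(name="track-one"))
    session.commit()
    count = session.query(Track).count()
```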
    corporal9736
    @corporal9736

    Hello, I'm trying to create an Engine using FILEDSN, which is a file path to a DSN file.
    I use pyodbc to connect to a SQL Server. I successfully connected by using

    pyodbc.connect("FILEDSN=/path/to/dsn/file")

    but it doesn't seem to work with create_engine. I tried

    test_url = URL.create("mssql+pyodbc", query={"odbc_connect": "FILEDSN=/path/to/dsn/file"})
    engine = create_engine(test_url)
    but it doesn't work. I wonder if there's a proper way to do this.
    Federico Caselli
    @CaselIT
    corporal9736
    @corporal9736
    it doesn't work... I found that it seems to force using unixODBC, even though I installed msodbc and specified it in the DSN file. I'm using Ubuntu 20.04 and SQLAlchemy 1.4.27.
    Michael Bourke
    @bourke
    you can customise how the connection gets created by parsing and manipulating the URL args: https://docs.sqlalchemy.org/en/14/core/engines.html#custom-dbapi-connect-arguments-on-connect-routines
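A sketch of that approach for the FILEDSN case: pass a creator callable so pyodbc opens the raw connection itself, while the URL still names the dialect. The DSN path is the placeholder from the question, not a real file:

```python
from sqlalchemy import create_engine

def connect_via_filedsn():
    # Deferred import: pyodbc is only needed when a connection is actually made.
    import pyodbc
    return pyodbc.connect("FILEDSN=/path/to/dsn/file")

def make_engine():
    # The URL still names the dialect (mssql+pyodbc) so SQLAlchemy knows
    # which SQL compiler to use; creator supplies the DBAPI connection.
    return create_engine("mssql+pyodbc://", creator=connect_via_filedsn)
```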
    corporal9736
    @corporal9736
    I've solved this problem. Thanks a lot! However, why do we still need a URL when I pass the pool or creator param to create_engine? Because of the standard?
    Michael Bourke
    @bourke
    that would be my assumption, yah
    Federico Caselli
    @CaselIT

    I've solved this problem. Thanks a lot! However, why do we still need a URL when I pass the pool or creator param to create_engine? Because of the standard?

    that's needed so SQLAlchemy can load the proper dialect

    Boris Knop
    @boris.knop_gitlab
    Hi guys, I'm using the most recent stable SQLAlchemy version, and I keep getting the following error: "cannot import name 'DefaultEngineStrategy' from 'sqlalchemy.engine.strategies'"
    One of the forums says I should downgrade, but any version prior to 1.4 doesn't have the AsyncSession, which I need... so I'm kind of stuck here. Any suggestions?
    2 replies
    Ian Wilson
    @ianjosephwilson
    Hi, I think I've confused myself. I'm using 1.4.21. Does appending to a relationship trigger an INSERT? (And likewise a remove()?) Like user.shipping_addresses.append(ShippingAddress(address=user.default_address)). I'm using a concrete association object.
    Or do I ALSO need to call session.add(shipping_address), where shipping_address = ShippingAddress(address=user.default_address)?
    Federico Caselli
    @CaselIT
    if you append to a relationship that's not viewonly=True, it should be added to the db without needing add
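An illustrative SQLite sketch of that behavior, using hypothetical minimal models rather than the asker's association-object mapping: appending alone is enough for the default save-update cascade to insert the child at flush time:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    shipping_addresses = relationship("ShippingAddress")

class ShippingAddress(Base):
    __tablename__ = "shipping_addresses"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey("users.id"))
    address = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    user = User()
    session.add(user)
    # Appending is enough: the save-update cascade pulls the new
    # ShippingAddress into the session, no explicit add() needed.
    user.shipping_addresses.append(ShippingAddress(address="221B Baker St"))
    session.commit()
    n = session.query(ShippingAddress).count()
```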
    Ian Wilson
    @ianjosephwilson
    It seems that if I append the child to the parent and also link the child back to the parent, it adds two entries.
    My naming scheme is confusing, but here are actual snippets from the code. I think maybe I have something misconfigured: https://gist.github.com/ianjosephwilson/ed8384661f3e3ec25164b98244ed50ee
    If I comment out that album link, it seems to work.
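One common cause of exactly this symptom is two relationship directions that aren't coordinated. A hedged sketch (hypothetical models, the gist may be configured differently): with back_populates, appending to the collection and setting the back-link are a single logical operation, so doing both does not create two rows:

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Album(Base):
    __tablename__ = "albums"
    id = Column(Integer, primary_key=True)
    # back_populates ties the two directions together; without it, the ORM
    # treats the collection and the scalar link as independent relationships.
    tracks = relationship("TrackRow", back_populates="album")

class TrackRow(Base):
    __tablename__ = "track_rows"
    id = Column(Integer, primary_key=True)
    album_id = Column(Integer, ForeignKey("albums.id"))
    album = relationship("Album", back_populates="tracks")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    album = Album()
    track = TrackRow()
    album.tracks.append(track)
    track.album = album  # redundant but harmless with back_populates
    session.add(album)
    session.commit()
    n = session.query(TrackRow).count()
```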
    Federico Caselli
    @CaselIT
    Christian Klinger
    @goschtl
    Hey, I have a question. I have a Postgres DATERANGE column, and I want to query only the year in the date object.
    I tried this

    sqlalchemy.extract('upper.year', Tab_MGLBetreuer.gueltigkeit) >= jahr.year

    jahr is a datetime.date object
    gueltigkeit is the DATERANGE field
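'upper.year' isn't a field that extract() understands. One approach that compiles for PostgreSQL is to take the range's upper bound with Postgres' upper() function and extract the year from the resulting date. The model below is a hypothetical stand-in for Tab_MGLBetreuer:

```python
from sqlalchemy import Column, Integer, extract, func
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import DATERANGE
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Betreuer(Base):  # hypothetical stand-in for Tab_MGLBetreuer
    __tablename__ = "betreuer"
    id = Column(Integer, primary_key=True)
    gueltigkeit = Column(DATERANGE)

# upper(range) yields the range's upper bound (a date), and extract()
# then takes the year from that date, e.g. for use in a filter:
#     session.query(Betreuer).filter(expr >= jahr.year)
expr = extract("year", func.upper(Betreuer.gueltigkeit))
sql = str(expr.compile(dialect=postgresql.dialect()))
```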
    Sergey Maranchuk
    @slav0nic
    Is it possible to force-commit a dirty but unmodified object? (For example, if I do user.name = 'Bob' and the old value was also 'Bob', SQLAlchemy doesn't generate an UPDATE statement.)
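One way to do this is flag_modified(), which marks the attribute as changed even though the value is the same; a minimal SQLite sketch:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base
from sqlalchemy.orm.attributes import flag_modified

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    user = User(name="Bob")
    session.add(user)
    session.commit()

    user.name = "Bob"            # same value: normally no UPDATE is emitted
    flag_modified(user, "name")  # force the attribute to count as changed
    dirty = user in session.dirty
    session.commit()
```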
    svaranasi-night-market
    @svaranasi-night-market
    Hi all, can someone help me with an SQLAlchemy query for UPDATE with LIMIT?
    I am using MySQL.
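UPDATE ... LIMIT is MySQL-specific syntax, so one straightforward option in 1.4 is a text() statement; table and column names below are hypothetical:

```python
from sqlalchemy import text

# MySQL's UPDATE ... LIMIT has no portable Core equivalent, so spell it
# out with text() and bound parameters. Names here are illustrative.
stmt = text(
    "UPDATE jobs SET status = :new_status "
    "WHERE status = :old_status LIMIT :n"
)

# Usage (against a MySQL session/connection):
# session.execute(stmt, {"new_status": "queued", "old_status": "new", "n": 100})
```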
    Luis Roel
    @luisroel91
    Has anyone used SQLAlchemy with FactoryBoy? I'm having an issue where a generated factory fails to save objects because they violate not-null constraints. Do I have to manually define each field in the factory?
    9 replies
    Luis Roel
    @luisroel91
    Anyone know if factory boy works with SA's automap_base?
    Leonid Komarovsky
    @shpoont

    Looks like I'm not using the session properly. Not sure what I'm missing 🤔
    When fetching existing data from the database and then updating it, SQLAlchemy doesn't seem to recognize objects already stored as part of a relationship and tries to insert them again.

    """ Here I generate data and merge it into the database """
    for track_name in ['track-one','track-two','track-three']:
        track = Track(name=track_name)
        track.load_top_tags()
        session.merge(track)
        session.commit()
    
    """ Clean session """
    session.expunge_all()
    
    """ Here I fetch stored tracks and update their related tag objects (doesn't work) """
    for track in session.query(Track).all():
        for top_tag in track.top_tags: # Load tags from database
            print(f"{track.name} has top tag {top_tag.tag.name}")
        track.load_top_tags()
        session.merge(track)
        session.commit()

    This results in a (sqlite3.IntegrityError) UNIQUE constraint failed: tags.id error, because SQLAlchemy is trying to insert data that already exists.

    Here is the full example with resulting output: https://colab.research.google.com/drive/18OMhYNxZGnKU4gaQ8l-ObKmVQEf50wtp?usp=sharing

    5 replies
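One way around the UNIQUE failure is a get-or-create lookup keyed on the tag's unique name, so existing rows are reused rather than re-inserted; a minimal SQLite sketch with a simplified model, not the notebook's exact schema:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Tag(Base):
    __tablename__ = "tags"
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True)

def get_or_create_tag(session, name):
    """Reuse an existing Tag row by its unique name, creating it only once."""
    tag = session.query(Tag).filter_by(name=name).one_or_none()
    if tag is None:
        tag = Tag(name=name)
        session.add(tag)
    return tag

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    get_or_create_tag(session, "rock")
    session.commit()
    get_or_create_tag(session, "rock")  # second call reuses the row
    session.commit()
    n = session.query(Tag).count()
```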
    Shredder
    @Shredder
    I have a table with an auto-increment integer id (primary key), and a unique-key constraint for three other columns. In my code, I am creating many instances of the mapped class and would like to insert them in the table, but only if they don't exist already. What is the best way to do this check for existence? I obviously can't check the primary key, so I could do a query for each object I want to insert to check whether an object with the unique-key values already exists -- this seems somewhat clunky. I was hoping to pre-fetch all objects from the table, then do a direct comparison (my_instance in set(fetched_objs_from_table)), but that doesn't seem to work. Or I could try to insert each object and catch IntegrityError when the unique-key constraint fails, but that is probably bad for performance.
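The pre-fetch idea can work if you compare plain key tuples instead of mapped instances: membership tests on mapped objects fall back to identity comparison, which is why `my_instance in set(fetched_objs_from_table)` never matches. A sketch with a hypothetical model:

```python
from sqlalchemy import Column, Integer, String, UniqueConstraint, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Item(Base):  # hypothetical model with a three-column unique key
    __tablename__ = "items"
    __table_args__ = (UniqueConstraint("a", "b", "c"),)
    id = Column(Integer, primary_key=True)
    a = Column(String)
    b = Column(String)
    c = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Item(a="x", b="y", c="z"))
    session.commit()

    # Pre-fetch only the unique-key tuples, not whole objects: plain
    # tuples compare by value, so membership tests work as intended.
    existing = {tuple(row) for row in session.query(Item.a, Item.b, Item.c)}

    candidates = [Item(a="x", b="y", c="z"), Item(a="p", b="q", c="r")]
    new_items = [i for i in candidates if (i.a, i.b, i.c) not in existing]
    session.add_all(new_items)  # only the genuinely new row is inserted
    session.commit()
    n = session.query(Item).count()
```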