    Georg Semmler
    @weiznich
    Other than that: You are missing two #[sql_type = "blub"] annotations for your #[derive(AsExpression)] attributes.
    Cobalt
    @Chaostheorie
    The annotations are added though. How would I go about working with serde_json::Value without parsing twice? My intention was to have an enum that can be used as a type in the models and that already has a specific structure, so that I can guarantee all elements in the DB have this type and aren't 'just' Values.
    Or is there no way other than evaluating after initially parsing for type checking?
    Georg Semmler
    @weiznich
    impl ToSql<FormElement, Pg> for FormElementEnum {
        fn to_sql<W: Write>(&self, out: &mut Output<W, Pg>) -> SResult {
            <serde_json::Value as ToSql<Jsonb, Pg>>::to_sql(&serde_json::to_value(self)?, out)
        }
    }
    and similarly for the FromSql impl.
    That said, if you only insert/load FormElementEnum you only need to implement FromSql/ToSql for this type.
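    A sketch of the matching FromSql impl, under the same assumptions as the snippet above (a FormElement SQL type, a serde-deserializable FormElementEnum, and diesel 1.x's Option<&[u8]> signature):

    ```rust
    // Sketch only: FormElement / FormElementEnum come from the question above.
    impl FromSql<FormElement, Pg> for FormElementEnum {
        fn from_sql(bytes: Option<&[u8]>) -> deserialize::Result<Self> {
            // Reuse the existing Jsonb impl, then deserialize the enum via serde.
            let value = <serde_json::Value as FromSql<Jsonb, Pg>>::from_sql(bytes)?;
            Ok(serde_json::from_value(value)?)
        }
    }
    ```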
    Cobalt
    @Chaostheorie
    Thank you. Now I understand what you mean. I'm kinda new to the rust type system and this is still kinda confusing. I will try to understand it more in depth. Your explanation is really appreciated
    Eduardo Colina
    @eacolina
    Hey! I'm trying to apply a migration for a Postgres DB but I'm getting an error: Unexpected null for non-null column
    This is the up file
    -- Your SQL goes here
    CREATE UNIQUE INDEX tx_hash_index
        ON sol_transaction(tx_hash);
    
    CREATE TABLE order_transaction(
        order_id VARCHAR(255) NOT NULL,
        tx_hash VARCHAR(255) UNIQUE NOT NULL,
        PRIMARY KEY(order_id, tx_hash),
        CONSTRAINT tx_hash_fk FOREIGN KEY(tx_hash)
        REFERENCES sol_transaction(tx_hash)
        ON DELETE CASCADE
    );
    And the referenced table was created in an earlier migration with:
    -- Your SQL goes here
    CREATE TABLE sol_transaction(
        id BIGSERIAL PRIMARY KEY,
        tx_hash VARCHAR(255) NOT NULL,
        from_address VARCHAR(255) NOT NULL,
        to_address VARCHAR(255) NOT NULL,
        VALUE DOUBLE PRECISION NOT NULL DEFAULT 0,
        BLOCK_HASH VARCHAR(255) NOT NULL,
        BLOCK_HEIGHT BIGINT NOT NULL,
        BLOCK_TIME TIMESTAMP  NOT NULL DEFAULT CURRENT_TIMESTAMP
    )
    Any idea what could be the issue?
    tx_hash is NOT NULL in both files
    Georg Semmler
    @weiznich
    @eacolina I cannot reproduce that with a local installation of postgres 13. Can you provide a reproducible example?
    (I guess it may depend on data stored in one of the tables)
    Jon Cahill
    @JonRCahill
    hi, I am reading up on defining and loading associations and I think I understand everything, but I am not sure how you would load the "belongs_to" defined on a model, especially if you have a collection of them. For example, if I have a Comment defined with #[belongs_to(Author)], how can I load the Comment and Author at once? Or if I have a collection of Comments, how can I load all the associated Authors?
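    One way this is commonly handled in diesel 1.4 (a sketch, assuming a comments.author_id -> authors.id foreign key with a joinable! declaration, and Queryable structs Comment and Author as in the question):

    ```rust
    // Load each Comment together with its Author in one query via a join.
    let pairs: Vec<(Comment, Author)> = comments::table
        .inner_join(authors::table)
        .load(&connection)?;

    // For an existing collection of Comments, one option is to collect the
    // author ids and fetch the matching Authors with eq_any.
    let author_ids: Vec<i32> = loaded_comments.iter().map(|c| c.author_id).collect();
    let authors: Vec<Author> = authors::table
        .filter(authors::id.eq_any(author_ids))
        .load(&connection)?;
    ```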
    HarmoGlace
    @zaitara:matrix.org
    [m]
    Does diesel support SSL connections with postgresql, and how can I set it up?
    HarmoGlace
    @zaitara:matrix.org
    [m]
    I see, thank you
    first name last name
    @igitter_gitlab

    I would like to deal with a table schema where a column references keys in the same table, as in the following:

    CREATE TABLE category (
        id INTEGER NOT NULL PRIMARY KEY,
        name VARCHAR NOT NULL UNIQUE,
        parent INTEGER,
        FOREIGN KEY (parent) REFERENCES category (id)
    );

    In the end I need a data structure like

    #[derive(Debug, Queryable, Serialize)]
    struct Category {
        pub id: i32,
        pub name: String,
        pub parent: Option<&Category>,
    }

    Can diesel do this for me or do I have to get the parent as an i32 and map the tree myself?

    Rasmus Kaj
    @kaj:stacken.kth.se
    [m]
    That structure can be arbitrarily deep, so you won't be able to fetch it with a single query; for the actual queries you will need to get parent as an i32.
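    Building the tree client-side from the flat (id, name, parent) rows can then look roughly like this (plain Rust sketch with hypothetical data; an owned children vector also sidesteps the Option<&Category> borrow problem from the question):

    ```rust
    use std::collections::HashMap;

    #[derive(Debug)]
    struct Category {
        id: i32,
        name: String,
        children: Vec<Category>,
    }

    /// Build a forest from flat rows of (id, name, parent_id),
    /// as they would come back from the category table.
    fn build_tree(rows: Vec<(i32, String, Option<i32>)>) -> Vec<Category> {
        // Group rows by their parent id.
        let mut by_parent: HashMap<Option<i32>, Vec<(i32, String)>> = HashMap::new();
        for (id, name, parent) in rows {
            by_parent.entry(parent).or_default().push((id, name));
        }

        // Recursively attach children, starting from the roots (parent = NULL).
        fn attach(
            parent: Option<i32>,
            by_parent: &mut HashMap<Option<i32>, Vec<(i32, String)>>,
        ) -> Vec<Category> {
            let mut out = Vec::new();
            for (id, name) in by_parent.remove(&parent).unwrap_or_default() {
                let children = attach(Some(id), by_parent);
                out.push(Category { id, name, children });
            }
            out
        }
        attach(None, &mut by_parent)
    }

    fn main() {
        let rows = vec![
            (1, "root".to_string(), None),
            (2, "child".to_string(), Some(1)),
        ];
        let forest = build_tree(rows);
        println!("{}", forest.len()); // number of root categories
        println!("{}", forest[0].children.len()); // children of the first root
    }
    ```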
    first name last name
    @igitter_gitlab
    @kaj:stacken.kth.se okay that makes sense, thank you!
    James Sewell
    @jamessewell
               connection
                    .build_transaction()
                    .run::<_, diesel::result::Error, _>(|| {
                        let results = diesel::insert_into(transitions::table)
                            .values(t)
                            .get_results(&connection)?;
                        diesel::insert_into(pending_campaigns::table)
                            .values(
                                results
                                    .iter()
                                    .map(|i:transitions::SqlType | PendingCampaigns {
                                        tid: i32::from_sql(Some(&i.0)).unwrap(),
                                        alert_id: String::from_sql(Some(&i.1)).unwrap()
                                    })
                                    .collect::<Vec<PendingCampaigns>>(),
                            )
                            .execute(&connection)?;            
                        Ok(())
                    })
                    .expect("Error saving transition");
    I for the life of me can't get this working
    I want to (in transaction) insert into transitions with a RETURNING clause, which I then grab two cols from and push into pending_campaigns as a vec
    Is there an easier way of doing this?
    James Sewell
    @jamessewell
    oh there is
    Georg Semmler
    @weiznich
    @jamessewell You need to give an explicit type to results. This can just be a vector of tuples containing the corresponding values. Generally you don't need to call from_sql explicitly outside of implementing custom types.
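    Following that advice, the closure from the snippet above might be rewritten roughly like this (a sketch: the transitions::id and transitions::alert_id column names are assumptions, as is a PendingCampaigns struct with matching tid/alert_id fields):

    ```rust
    connection
        .build_transaction()
        .run::<_, diesel::result::Error, _>(|| {
            // Give `results` an explicit type: a Vec of plain Rust tuples.
            // Diesel deserializes the RETURNING values itself, so no manual
            // from_sql calls are needed.
            let results: Vec<(i32, String)> = diesel::insert_into(transitions::table)
                .values(t)
                .returning((transitions::id, transitions::alert_id))
                .get_results(&connection)?;
            diesel::insert_into(pending_campaigns::table)
                .values(
                    results
                        .into_iter()
                        .map(|(tid, alert_id)| PendingCampaigns { tid, alert_id })
                        .collect::<Vec<_>>(),
                )
                .execute(&connection)?;
            Ok(())
        })
        .expect("Error saving transition");
    ```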
    code-mm
    @code-mm

    Hey! I want to implement keyset pagination on an API. I'm struggling with implementing the below raw sql query in Diesel.
    The query should fetch the rows of the page before a certain id.

    SELECT * FROM (SELECT * FROM table WHERE id < 100 ORDER BY id DESC LIMIT 10) AS t ORDER BY id;

    Can you explain how to implement the outer ORDER BY so I get the rows of the page in the correct order? Thanks!

    Georg Semmler
    @weiznich
    That cannot be answered without knowing the structure of table.
    Otherwise just to make sure: You have seen this guide?
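    For the common case where the table has an integer id column, one sketch (with hypothetical table/struct names) avoids the wrapped subquery entirely by restoring ascending order client-side:

    ```rust
    // Fetch the page with the inner query only:
    //   WHERE id < 100 ORDER BY id DESC LIMIT 10
    // Assumes a diesel table `posts` with an `id` column and a Queryable
    // struct `Post`; both names are placeholders.
    let mut page: Vec<Post> = posts::table
        .filter(posts::id.lt(100))
        .order(posts::id.desc())
        .limit(10)
        .load(&connection)?;
    // The outer "ORDER BY id" from the raw SQL, done in Rust.
    page.reverse();
    ```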
    Wenew Zhang
    @wenewzhang
    hi
    error[E0277]: the trait bound (chrono::DateTime<chrono::Utc>,): Queryable<diesel::sql_types::Nullable<diesel::sql_types::Timestamptz>, _> is not satisfied
    --> betpoold/src/models/round.rs:58:59
    |
    58 | let latest_time = rounds.select(max(create_time)).get_result::<RoundCreateTimeDTO>(conn).unwrap();
    | ^^^^^^^^^^ the trait Queryable<diesel::sql_types::Nullable<diesel::sql_types::Timestamptz>, _> is not implemented for (chrono::DateTime<chrono::Utc>,)
    |
    How do I select max(time) from a table?
    [dependencies.diesel]
    version = "1.4.6"
    features = ["postgres", "r2d2", "chrono"]
    Georg Semmler
    @weiznich
    @wenewzhang max returns a nullable expression. That means you need to wrap the corresponding field into an Option.
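    Concretely, a sketch of the fix (the query comes from the question above; the target type just becomes an Option because max() returns Nullable<Timestamptz>):

    ```rust
    // None is returned when the table is empty, Some(latest) otherwise.
    let latest_time: Option<chrono::DateTime<chrono::Utc>> =
        rounds.select(max(create_time)).get_result(conn)?;
    ```

    The same applies if the value is kept in a DTO struct: the corresponding field must be Option<DateTime<Utc>> rather than DateTime<Utc>.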
    Wenew Zhang
    @wenewzhang
    @weiznich thank you, i will try.
    do-nat
    @do-nat
    Hey all! I want to filter a table and check if the id (which is a uuid) is in a set of uuids. Is there an "where in" filter in diesel?
    do-nat
    @do-nat
    I just got it, it is eq_any :laughing: awesome!
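    For reference, a minimal sketch of that filter (table, column, and struct names here are hypothetical; wanted is a Vec<uuid::Uuid>):

    ```rust
    // SELECT * FROM records WHERE id IN (...wanted...)
    let rows = records::table
        .filter(records::id.eq_any(&wanted))
        .load::<Record>(&connection)?;
    ```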
    first name last name
    @igitter_gitlab
    Is there a way to execute several operations in one transaction block, so that if one operation fails the whole transaction can be rolled back?
    first name last name
    @igitter_gitlab
    thx!
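    The usual shape of that in diesel is the transaction closure (a sketch with hypothetical table and value names): the closure runs inside BEGIN/COMMIT, and returning an Err rolls everything back.

    ```rust
    connection.transaction::<_, diesel::result::Error, _>(|| {
        // Both inserts succeed together or not at all; any `?` that
        // propagates an Err triggers a ROLLBACK.
        diesel::insert_into(orders::table)
            .values(&new_order)
            .execute(&connection)?;
        diesel::insert_into(order_items::table)
            .values(&new_items)
            .execute(&connection)?;
        Ok(())
    })?;
    ```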
    Dilawar Singh
    @dilawar
    image.png
    Can't read snippets on Firefox/Desktop (Linux/OpenSUSE TW, version 86.0.1).
    Snippets are fine on Firefox Nightly/Android.
    I disabled most plugins. I had to change the 'color' value manually in the CSS using developer tools.
    Tim Böttcher
    @TimBoettcher

    Hey,

    I'm trying to get started with diesel_cli on Windows, but it proves pretty difficult. I managed to get the installation to complete, but diesel setup fails:
    C:\Users\username\rust\diesel_demo> diesel setup --database-url postgres://username:password@localhost/diesel_demo
    Creating database: diesel_demo
    SCRAM authentication requires libpq version 10 or above

    Of course, I checked the libpq.dll's version number - it's v13. So I'm guessing diesel is getting libpq from somewhere else, though I have no idea where.
    Any suggestions?

    Georg Semmler
    @weiznich
    I'm not a Windows user myself, but I think the Windows linker searches for dependent DLLs in your PATH. That means it could be worthwhile to check whether there is another version of libpq.dll somewhere in there.
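    One quick way to check that (assuming a standard cmd.exe prompt on Windows): the built-in where command lists every copy of a file found on PATH, in search order.

    ```shell
    where libpq.dll
    ```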
    二手掉包工程师
    @hi-rustin
    @weiznich https://github.com/diesel-rs/diesel/pull/2738#issuecomment-823915738 It seems the Windows CI is broken because Chocolatey's service returned a 503.
    Georg Semmler
    @weiznich
    503 means the service is not available, which is likely a temporary thing. I will just wait a bit and rerun the CI later.
    二手掉包工程师
    @hi-rustin

    503 means the service is not available, which is likely a temporary thing. I will just wait a bit and rerun the CI later.

    Got it. Thanks!

    Tim Böttcher
    @TimBoettcher

    @weiznich Yeah, there was indeed another one in PATH. I removed it, and now diesel does nothing.
    I removed it:
    cargo uninstall diesel_cli

    cargo install diesel_cli --no-default-features --features postgres

    And now, when I type diesel, nothing happens. A linebreak gets inserted and the prompt appears again.

    Did anyone experience anything like that? Otherwise, I'm almost willing to just give up on diesel on Windows - it doesn't seem to work very well on that operating system... Although I did get output before removing the wrong libpq.dll from the PATH...