    Georg Semmler
    @weiznich
    @alexxroche That sounds like you've used something that got interpreted as an SQLite database URL. Otherwise, see diesel-rs/diesel#2646 for potential improvements to the error message. Contributions are welcome there.
    Alexx Roche
    @alexxroche
    But psql $(grep DATABASE_URL .env | awk -F'DATABASE_URL=' '{print $2}') connects just fine.
    diesel seems to be correctly collecting the database URL from .env (Creating database: postgres://prwad:prwad@127.0.0.1:5432/prwad?sslmode=disable) but then does nothing with it.
    Georg Semmler
    @weiznich
    Are you sure that you compiled diesel_cli with postgres support enabled? Connecting to databases using postgres URLs is something we obviously test in our CI :wink:
    Alexx Roche
    @alexxroche
    I'll run cargo install diesel_cli --no-default-features --features "postgres sqlite mysql" and let you know if you just became my hero of the day.
    Alexx Roche
    @alexxroche
    @weiznich Thank you! cargo was having some linker failure that was preventing it from compiling in pg support.
    Now I need to track down why my system was throwing: Compiling diesel_cli v1.4.1 error: linking with `cc` failed: exit code: 1 (but I feel that isn't a diesel issue.)
    Georg Semmler
    @weiznich
    @alexxroche That means rustc cannot find a compatible version of libpq
    1 reply
    Wagner
    @mwnDK1402

    I'm running a complicated, raw SQL query using sql_query. I'll post it as a reply to this thread.

    It works when I run it locally, in a Rocket unit test on Windows connected to a local MySQL database.

    When I deploy it on AWS Lambda, it gives me an SQL error: Using unsupported buffer type: 245 (parameter: 1)
    When deployed, it connects to an AWS RDS MySQL database.

    I'm not sure how to start troubleshooting this, since the query does run on both MySQL servers if I substitute the ? parameter with a literal. I would like some help understanding what could cause this difference, and also understanding what the error means.

    The closest I could find was this: https://forums.mysql.com/read.php?168,242330,242330
    But that's a different error, with no responses.

    40 replies
    Cobalt
    @Chaostheorie
    Is there still support for custom types with enums? The example found in the docs is outdated, and I somehow run into problems with Expression when I try to do the same as JSONB. I'm pretty sure my implementation has one too many ToSql/FromSql implementations:
    use diesel::*;
    use diesel::{
        deserialize::{FromSql, Result as DResult},
        pg::Pg,
        serialize::{IsNull, Output, Result as SResult, ToSql},
    };
    use serde::{Deserialize, Serialize};
    use std::io::Write;
    
    #[derive(Debug, Clone, Serialize, Deserialize, AsExpression, PartialEq)]
    #[diesel(not_sized)]
    pub struct ShortTextElement {
        pub name: String,
        pub value: Option<String>,
    }
    
    #[derive(Debug, Clone, Serialize, Deserialize, AsExpression, PartialEq)]
    #[diesel(not_sized)]
    pub struct TextElement {
        pub name: String,
    }
    
    #[derive(SqlType)]
    #[postgres(type_name = "Jsonb")]
    pub struct FormElement;
    
    #[derive(Debug, PartialEq, Serialize, Deserialize, FromSqlRow, AsExpression)]
    #[sql_type = "FormElement"]
    #[diesel(not_sized)]
    pub enum FormElementEnum {
        ShortTextElement(ShortTextElement),
        TextElement(TextElement),
    }
    
    /// Copied from diesel source for JSONB with slight variation with distinctive encoding
    impl ToSql<FormElement, Pg> for FormElementEnum {
        fn to_sql<W: Write>(&self, out: &mut Output<W, Pg>) -> SResult {
            out.write_all(&[2])?;
            serde_json::to_writer(out, self)
                .map(|_| IsNull::No)
                .map_err(Into::into)
        }
    }
    
    /// Copied from diesel source for JSONB with slight variation with distinctive encoding
    impl ToSql<FormElementEnum, Pg> for FormElementEnum {
        fn to_sql<W: Write>(&self, out: &mut Output<W, Pg>) -> SResult {
            out.write_all(&[2])?;
            serde_json::to_writer(out, self)
                .map(|_| IsNull::No)
                .map_err(Into::into)
        }
    }
    
    /// Copied from diesel source for JSONB with slight variation with distinctive encoding
    impl FromSql<FormElement, Pg> for FormElementEnum {
        fn from_sql(nullable: Option<&[u8]>) -> DResult<Self> {
            let bytes = not_none!(nullable);
            if bytes[0] != 2 {
                return Err("Unsupported JSONB encoding version".into());
            }
            serde_json::from_slice(&bytes[1..]).map_err(|_| "Invalid Json".into())
        }
    }
    
    /// Copied from diesel source for JSONB with slight variation with distinctive encoding
    impl FromSql<FormElementEnum, Pg> for FormElementEnum {
        fn from_sql(nullable: Option<&[u8]>) -> DResult<Self> {
            let bytes = not_none!(nullable);
            if bytes[0] != 2 {
                return Err("Unsupported JSONB encoding version".into());
            }
            serde_json::from_slice(&bytes[1..]).map_err(|_| "Invalid Json".into())
        }
    }
    While this code compiles, it can't be used as the type of an attribute with Insertable. It seems like the derived AsExpression either doesn't properly implement Expression or I'm doing something wrong:
    2 replies
    Georg Semmler
    @weiznich
    First of all, which documentation do you refer to? Documentation published on our website is tested via the CI system, so I'm quite sure it's not outdated.
    Cobalt
    @Chaostheorie
    It's not really documentation, but it's the closest thing I've found
    Georg Semmler
    @weiznich
    This is also tested via CI and I can assure you that it is not outdated. (More like: It's a test case that uses an unreleased future version of diesel.)
    Cobalt
    @Chaostheorie
    Ah okay. It's just that PgValue didn't exist in my version. Seems like this was my error
    Georg Semmler
    @weiznich
    Instead of implementing FromSql/ToSql manually, you should forward inside your implementation to the existing impl for serde_json::Value; otherwise you need to reimplement much of the postgres binary protocol parsing yourself.
    Cobalt
    @Chaostheorie
    But isn't that what I'm doing with serde_json::from/to_reader?
    Georg Semmler
    @weiznich
    No, that's not what you are doing: you don't call from_sql/to_sql on serde_json::Value internally.
    Other than that: you are missing two #[sql_type = "blub"] annotations for your #[derive(AsExpression)] attributes.
    Cobalt
    @Chaostheorie
    The annotations are added, though. How would I go about working with serde_json::Value without parsing twice? My intention was to have an enum that can be used as a type in the models and that already has a specific structure, so that I can guarantee all elements in the DB have this type and aren't 'just' Values.
    Or is there no way other than validating after the initial parse?
    Georg Semmler
    @weiznich
    impl ToSql<FormElement, Pg> for FormElementEnum {
        fn to_sql<W: Write>(&self, out: &mut Output<W, Pg>) -> SResult {
            // requires `use diesel::sql_types::Jsonb;`
            let value = serde_json::to_value(self)?;
            <serde_json::Value as ToSql<Jsonb, Pg>>::to_sql(&value, out)
        }
    }
    and similarly for the FromSql impl.
    That written, if you only ever insert/load FormElementEnum, you only need to implement FromSql/ToSql for this one type.
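    A matching FromSql impl could look like this (a sketch, assuming the diesel 1.4 signature that takes Option<&[u8]>; it forwards to the serde_json::Value impl so that handles the JSONB version byte):
    impl FromSql<FormElement, Pg> for FormElementEnum {
        fn from_sql(bytes: Option<&[u8]>) -> DResult<Self> {
            // Let the existing serde_json::Value impl parse the raw JSONB ...
            let value = <serde_json::Value as FromSql<Jsonb, Pg>>::from_sql(bytes)?;
            // ... then turn that Value into the strongly typed enum.
            serde_json::from_value(value).map_err(Into::into)
        }
    }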
    Cobalt
    @Chaostheorie
    Thank you. Now I understand what you mean. I'm kinda new to the rust type system and this is still kinda confusing. I will try to understand it more in depth. Your explanation is really appreciated
    Eduardo Colina
    @eacolina
    Hey! I'm trying to apply a migration for a Postgres DB but I'm getting an error: Unexpected null for non-null column
    This is the up file
    -- Your SQL goes here
    CREATE UNIQUE INDEX tx_hash_index
        ON sol_transaction(tx_hash);
    
    CREATE TABLE order_transaction(
        order_id VARCHAR(255) NOT NULL ,
        tx_hash VARCHAR(255) UNIQUE NOT NULL,
        PRIMARY KEY(order_id, tx_hash),
        CONSTRAINT tx_hash_fk FOREIGN KEY(tx_hash)
        REFERENCES sol_transaction(tx_hash)
        ON DELETE CASCADE
    );
    And the referenced table was created in an earlier migration with:
    -- Your SQL goes here
    CREATE TABLE sol_transaction(
        id BIGSERIAL PRIMARY KEY,
        tx_hash VARCHAR(255) NOT NULL,
        from_address VARCHAR(255) NOT NULL,
        to_address VARCHAR(255) NOT NULL,
        VALUE DOUBLE PRECISION NOT NULL DEFAULT 0,
        BLOCK_HASH VARCHAR(255) NOT NULL,
        BLOCK_HEIGHT BIGINT NOT NULL,
        BLOCK_TIME TIMESTAMP  NOT NULL DEFAULT CURRENT_TIMESTAMP
    )
    Any idea what could be the issue?
    tx_hash is NOT NULL in both files
    Georg Semmler
    @weiznich
    @eacolina I cannot reproduce that with a local installation of postgres 13. Can you provide a reproducible example?
    (I guess it may depend on data stored in one of the tables)
    Jon Cahill
    @JonRCahill
    hi, I am reading up on defining and loading associations and I think I understand everything, but I am not sure how you would load the belongs_to defined on a model, especially if you have a collection of them. For example, if I have a Comment defined with #[belongs_to(Author)], how can I load the Comment and its Author at once, or, given a collection of Comments, how can I load all the associated Authors?
    1 reply
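    The usual pattern from the associations guide is to load the parents first, then group the children, roughly like this (a sketch; it assumes Author derives Identifiable, Comment derives Associations, and authors/comments are the generated schema modules):
    use diesel::prelude::*;
    
    // Load all authors, then every comment belonging to them in one query,
    // and zip the grouped comments back onto their authors (sketch):
    let authors = authors::table.load::<Author>(&conn)?;
    let comments = Comment::belonging_to(&authors)
        .load::<Comment>(&conn)?
        .grouped_by(&authors);
    let pairs: Vec<(Author, Vec<Comment>)> = authors.into_iter().zip(comments).collect();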
    HarmoGlace
    @zaitara:matrix.org
    [m]
    Does diesel support SSL connections with postgresql, and how can I set it up?
    1 reply
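    For the record, PgConnection hands the connection string to libpq, so libpq's parameters such as sslmode should apply; a sketch with placeholder credentials:
    use diesel::pg::PgConnection;
    use diesel::prelude::*;
    
    // sslmode (and other libpq parameters) go straight into the URL (sketch):
    let database_url = "postgres://user:password@host:5432/db?sslmode=require";
    let conn = PgConnection::establish(database_url)?;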
    HarmoGlace
    @zaitara:matrix.org
    [m]
    I see, thank you
    first name last name
    @igitter_gitlab

    I would like to deal with a table schema where I have a column of keys into the same table, as in the following:

    CREATE TABLE category (
    id INTEGER NOT NULL PRIMARY KEY,
    name VARCHAR NOT NULL UNIQUE,
    parent INTEGER,
    FOREIGN KEY (parent) REFERENCES category (id)
    );

    In the end I need a data structure like

    #[derive(Debug, Queryable, Serialize)]
    struct Category {
        pub id: i32,
        pub name: String,
        pub parent: Option<&Category>,
    }

    Can diesel do this for me, or do I have to get the parent as an i32 and map the tree myself?

    Rasmus Kaj
    @kaj:stacken.kth.se
    [m]
    That structure can be arbitrarily deep, so you won't be able to get it with a single query; for the actual queries you will need to get parent as an i32.
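    Concretely, that means loading the flat rows with parent as an Option<i32> and assembling the tree in memory, e.g. (a sketch; category is the generated schema module):
    use std::collections::HashMap;
    
    #[derive(Debug, Queryable)]
    struct Category {
        id: i32,
        name: String,
        parent: Option<i32>, // raw foreign key instead of a reference
    }
    
    // Load everything flat, then group children under their parent id;
    // the entry for None holds the roots of the tree (sketch):
    let categories: Vec<Category> = category::table.load(&conn)?;
    let mut children: HashMap<Option<i32>, Vec<&Category>> = HashMap::new();
    for c in &categories {
        children.entry(c.parent).or_default().push(c);
    }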
    first name last name
    @igitter_gitlab
    @kaj:stacken.kth.se okay that makes sense, thank you!
    James Sewell
    @jamessewell
               connection
                    .build_transaction()
                    .run::<_, diesel::result::Error, _>(|| {
                        let results = diesel::insert_into(transitions::table)
                            .values(t)
                            .get_results(&connection)?;
                        diesel::insert_into(pending_campaigns::table)
                            .values(
                                results
                                    .iter()
                                    .map(|i:transitions::SqlType | PendingCampaigns {
                                        tid: i32::from_sql(Some(&i.0)).unwrap(),
                                        alert_id: String::from_sql(Some(&i.1)).unwrap()
                                    })
                                    .collect::<Vec<PendingCampaigns>>(),
                            )
                            .execute(&connection)?;            
                        Ok(())
                    })
                    .expect("Error saving transition");
    I for the life of me can't get this working
    I want to (in transaction) insert into transitions with a RETURNING clause, which I then grab two cols from and push into pending_campaigns as a vec
    Is there an easier way of doing this?
    James Sewell
    @jamessewell
    oh there is
    Georg Semmler
    @weiznich
    @jamessewell You need to give an explicit type to results. This can just be a vector of tuples containing the corresponding values. Generally you don't need to call from_sql explicitly outside of implementing custom types.
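    Roughly like this, as a sketch (the id/alert_id columns on transitions are assumed):
    // Annotate the RETURNING result with a concrete tuple type, then map
    // it into the insertable struct — no manual from_sql calls (sketch):
    let results: Vec<(i32, String)> = diesel::insert_into(transitions::table)
        .values(t)
        .returning((transitions::id, transitions::alert_id))
        .get_results(&connection)?;
    
    let rows: Vec<PendingCampaigns> = results
        .into_iter()
        .map(|(tid, alert_id)| PendingCampaigns { tid, alert_id })
        .collect();
    
    diesel::insert_into(pending_campaigns::table)
        .values(&rows)
        .execute(&connection)?;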
    code-mm
    @code-mm

    Hey! I want to implement keyset pagination on an API. I'm struggling with implementing the raw SQL query below in Diesel.
    The query should fetch the rows of the page before a certain id.

    SELECT * FROM (SELECT * FROM table WHERE id < 100 ORDER BY id DESC LIMIT 10) AS t ORDER BY id;

    Can you explain how to implement the outer ORDER BY so the rows of the page come back in the correct order? Thanks!

    Georg Semmler
    @weiznich
    That cannot be answered without knowing the structure of table.
    5 replies
    Otherwise just to make sure: You have seen this guide?
    2 replies
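    One way to avoid the outer SELECT entirely, as a sketch with placeholder records/Record names: run the inner query through the DSL and restore the ascending order in memory:
    // Page before id 100, newest first, then flip the page so the rows
    // come back ascending (sketch):
    let mut page = records::table
        .filter(records::id.lt(100))
        .order(records::id.desc())
        .limit(10)
        .load::<Record>(&conn)?;
    page.reverse();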
    Wenew Zhang
    @wenewzhang
    hi
    error[E0277]: the trait bound (chrono::DateTime<chrono::Utc>,): Queryable<diesel::sql_types::Nullable<diesel::sql_types::Timestamptz>, _> is not satisfied
    --> betpoold/src/models/round.rs:58:59
    |
    58 | let latest_time = rounds.select(max(create_time)).get_result::<RoundCreateTimeDTO>(conn).unwrap();
    | ^^^^^^^^^^ the trait Queryable<diesel::sql_types::Nullable<diesel::sql_types::Timestamptz>, _> is not implemented for (chrono::DateTime<chrono::Utc>,)
    |
    How do I select the max(create_time) from the table?
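    The Nullable in the error message is the hint: max() returns NULL on an empty table, so the receiving type has to be an Option rather than a plain tuple. A sketch, dropping the RoundCreateTimeDTO wrapper:
    // max(create_time) maps to Nullable<Timestamptz>, so load an Option:
    let latest_time = rounds
        .select(max(create_time))
        .get_result::<Option<chrono::DateTime<chrono::Utc>>>(conn)
        .unwrap();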